00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3403 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3014 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.031 The recommended git tool is: git 00:00:00.031 using credential 00000000-0000-0000-0000-000000000002 00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.055 Fetching changes from the remote Git repository 00:00:00.057 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.078 Using shallow fetch with depth 1 00:00:00.078 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.078 > git --version # timeout=10 00:00:00.119 > git --version # 'git version 2.39.2' 00:00:00.119 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.120 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.120 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.990 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.001 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.012 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD) 00:00:04.012 > git config core.sparsecheckout # timeout=10 00:00:04.024 > git read-tree -mu HEAD # timeout=10 00:00:04.041 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5 00:00:04.062 Commit message: "ansible/roles/custom_facts: Drop nvme features" 00:00:04.062 > git rev-list --no-walk 9a89b74058758bad3d12019ff5b47fa0c915a5eb # timeout=10 00:00:04.154 [Pipeline] Start of Pipeline 00:00:04.166 [Pipeline] library 00:00:04.167 Loading library shm_lib@master 00:00:04.168 Library shm_lib@master is cached. Copying from home. 00:00:04.184 [Pipeline] node 00:00:04.195 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.197 [Pipeline] { 00:00:04.209 [Pipeline] catchError 00:00:04.210 [Pipeline] { 00:00:04.228 [Pipeline] wrap 00:00:04.241 [Pipeline] { 00:00:04.252 [Pipeline] stage 00:00:04.254 [Pipeline] { (Prologue) 00:00:04.437 [Pipeline] sh 00:00:04.719 + logger -p user.info -t JENKINS-CI 00:00:04.738 [Pipeline] echo 00:00:04.740 Node: GP8 00:00:04.748 [Pipeline] sh 00:00:05.050 [Pipeline] setCustomBuildProperty 00:00:05.067 [Pipeline] echo 00:00:05.070 Cleanup processes 00:00:05.076 [Pipeline] sh 00:00:05.369 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.369 3539012 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.381 [Pipeline] sh 00:00:05.665 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.666 ++ grep -v 'sudo pgrep' 00:00:05.666 ++ awk '{print $1}' 00:00:05.666 + sudo kill -9 00:00:05.666 + true 00:00:05.680 [Pipeline] cleanWs 00:00:05.692 [WS-CLEANUP] Deleting project workspace... 00:00:05.692 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.700 [WS-CLEANUP] done 00:00:05.704 [Pipeline] setCustomBuildProperty 00:00:05.716 [Pipeline] sh 00:00:05.999 + sudo git config --global --replace-all safe.directory '*' 00:00:06.083 [Pipeline] nodesByLabel 00:00:06.085 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.092 [Pipeline] httpRequest 00:00:06.096 HttpMethod: GET 00:00:06.097 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:06.105 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:06.117 Response Code: HTTP/1.1 200 OK 00:00:06.117 Success: Status code 200 is in the accepted range: 200,404 00:00:06.117 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:07.710 [Pipeline] sh 00:00:07.993 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:08.013 [Pipeline] httpRequest 00:00:08.017 HttpMethod: GET 00:00:08.018 URL: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:08.020 Sending request to url: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:08.023 Response Code: HTTP/1.1 200 OK 00:00:08.024 Success: Status code 200 is in the accepted range: 200,404 00:00:08.024 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:26.145 [Pipeline] sh 00:00:26.446 + tar --no-same-owner -xf spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:28.995 [Pipeline] sh 00:00:29.281 + git -C spdk log --oneline -n5 00:00:29.281 8571999d8 test/scheduler: Stop moving all processes between cgroups 00:00:29.281 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:00:29.281 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:00:29.281 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:00:29.281 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:00:29.301 [Pipeline] withCredentials 00:00:29.313 > git --version # timeout=10 00:00:29.327 > git --version # 'git version 2.39.2' 00:00:29.346 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:29.349 [Pipeline] { 00:00:29.359 [Pipeline] retry 00:00:29.361 [Pipeline] { 00:00:29.378 [Pipeline] sh 00:00:29.666 + git ls-remote http://dpdk.org/git/dpdk main 00:00:29.679 [Pipeline] } 00:00:29.701 [Pipeline] // retry 00:00:29.706 [Pipeline] } 00:00:29.730 [Pipeline] // withCredentials 00:00:29.743 [Pipeline] httpRequest 00:00:29.749 HttpMethod: GET 00:00:29.750 URL: http://10.211.164.96/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:00:29.754 Sending request to url: http://10.211.164.96/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:00:29.769 Response Code: HTTP/1.1 200 OK 00:00:29.769 Success: Status code 200 is in the accepted range: 200,404 00:00:29.770 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:00:37.106 [Pipeline] sh 00:00:37.390 + tar --no-same-owner -xf dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:00:38.785 [Pipeline] sh 00:00:39.078 + git -C dpdk log --oneline -n5 00:00:39.078 7e06c0de19 examples: move alignment attribute on types for MSVC 00:00:39.078 27595cd830 drivers: move alignment attribute on types for MSVC 00:00:39.078 0efea35a2b app: move alignment attribute on types for MSVC 00:00:39.078 e2e546ab5b version: 24.07-rc0 00:00:39.078 
a9778aad62 version: 24.03.0 00:00:39.091 [Pipeline] } 00:00:39.110 [Pipeline] // stage 00:00:39.117 [Pipeline] stage 00:00:39.119 [Pipeline] { (Prepare) 00:00:39.142 [Pipeline] writeFile 00:00:39.161 [Pipeline] sh 00:00:39.445 + logger -p user.info -t JENKINS-CI 00:00:39.462 [Pipeline] sh 00:00:39.749 + logger -p user.info -t JENKINS-CI 00:00:39.762 [Pipeline] sh 00:00:40.073 + cat autorun-spdk.conf 00:00:40.073 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.073 SPDK_TEST_NVMF=1 00:00:40.073 SPDK_TEST_NVME_CLI=1 00:00:40.073 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.073 SPDK_TEST_NVMF_NICS=e810 00:00:40.073 SPDK_TEST_VFIOUSER=1 00:00:40.073 SPDK_RUN_UBSAN=1 00:00:40.073 NET_TYPE=phy 00:00:40.073 SPDK_TEST_NATIVE_DPDK=main 00:00:40.073 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:40.081 RUN_NIGHTLY=1 00:00:40.085 [Pipeline] readFile 00:00:40.108 [Pipeline] withEnv 00:00:40.110 [Pipeline] { 00:00:40.123 [Pipeline] sh 00:00:40.404 + set -ex 00:00:40.404 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:40.404 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:40.404 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.404 ++ SPDK_TEST_NVMF=1 00:00:40.404 ++ SPDK_TEST_NVME_CLI=1 00:00:40.404 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.404 ++ SPDK_TEST_NVMF_NICS=e810 00:00:40.404 ++ SPDK_TEST_VFIOUSER=1 00:00:40.404 ++ SPDK_RUN_UBSAN=1 00:00:40.404 ++ NET_TYPE=phy 00:00:40.404 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:40.404 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:40.404 ++ RUN_NIGHTLY=1 00:00:40.404 + case $SPDK_TEST_NVMF_NICS in 00:00:40.404 + DRIVERS=ice 00:00:40.404 + [[ tcp == \r\d\m\a ]] 00:00:40.404 + [[ -n ice ]] 00:00:40.404 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:40.404 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:40.404 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:40.404 rmmod: ERROR: Module irdma is not currently loaded 00:00:40.404 rmmod: ERROR: Module i40iw is not currently loaded 00:00:40.404 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:40.404 + true 00:00:40.404 + for D in $DRIVERS 00:00:40.404 + sudo modprobe ice 00:00:40.404 + exit 0 00:00:40.414 [Pipeline] } 00:00:40.432 [Pipeline] // withEnv 00:00:40.437 [Pipeline] } 00:00:40.454 [Pipeline] // stage 00:00:40.463 [Pipeline] catchError 00:00:40.465 [Pipeline] { 00:00:40.480 [Pipeline] timeout 00:00:40.480 Timeout set to expire in 40 min 00:00:40.482 [Pipeline] { 00:00:40.496 [Pipeline] stage 00:00:40.497 [Pipeline] { (Tests) 00:00:40.508 [Pipeline] sh 00:00:40.788 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.788 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.788 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.788 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:40.788 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:40.788 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:40.788 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:40.788 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:40.788 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:40.788 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:40.788 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.788 + source /etc/os-release 00:00:40.788 ++ NAME='Fedora Linux' 00:00:40.788 ++ VERSION='38 (Cloud Edition)' 00:00:40.788 ++ ID=fedora 00:00:40.788 ++ VERSION_ID=38 00:00:40.788 ++ VERSION_CODENAME= 00:00:40.788 ++ PLATFORM_ID=platform:f38 00:00:40.788 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:40.788 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:40.788 ++ LOGO=fedora-logo-icon 00:00:40.788 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:40.788 ++ HOME_URL=https://fedoraproject.org/ 00:00:40.788 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:40.788 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:40.788 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:40.788 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:40.788 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:40.788 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:40.788 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:40.788 ++ SUPPORT_END=2024-05-14 00:00:40.788 ++ VARIANT='Cloud Edition' 00:00:40.788 ++ VARIANT_ID=cloud 00:00:40.788 + uname -a 00:00:40.788 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:40.788 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:41.724 Hugepages 00:00:41.724 node hugesize free / total 00:00:41.724 node0 1048576kB 0 / 0 00:00:41.724 node0 2048kB 0 / 0 00:00:41.724 node1 1048576kB 0 / 0 00:00:41.724 node1 2048kB 0 / 0 00:00:41.724 00:00:41.724 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:41.724 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:41.724 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:41.724 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:41.724 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:41.724 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:41.724 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:41.724 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:41.724 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:41.724 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:41.724 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:41.724 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:41.724 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:41.724 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:41.724 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:41.724 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:41.724 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:41.724 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:41.724 + rm -f /tmp/spdk-ld-path 00:00:41.724 + source autorun-spdk.conf 00:00:41.724 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.724 ++ SPDK_TEST_NVMF=1 00:00:41.724 ++ SPDK_TEST_NVME_CLI=1 00:00:41.724 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.724 ++ SPDK_TEST_NVMF_NICS=e810 00:00:41.724 ++ SPDK_TEST_VFIOUSER=1 00:00:41.724 ++ SPDK_RUN_UBSAN=1 00:00:41.724 ++ NET_TYPE=phy 00:00:41.724 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:41.724 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:41.724 ++ RUN_NIGHTLY=1 00:00:41.724 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:41.724 + [[ -n '' ]] 00:00:41.724 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
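The `git config --global --add safe.directory` step just traced (like the `--replace-all safe.directory '*'` call in the Prologue) works around git's repository-ownership check: since git 2.35.2 (the CVE-2022-24765 fix), git invoked as one user on a checkout owned by another refuses to run, which would break every subsequent `sudo git ...` command in this job. A minimal sketch of the failure mode and the fix, reusing the workspace path from the log (the error text shown is illustrative):

    # Without the allow-list entry, git run via sudo on the jenkins-owned
    # checkout fails fast:
    sudo git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk status
    # fatal: detected dubious ownership in repository at '/var/jenkins/...'

    # Allow-listing the path (or '*', as the Prologue stage does) clears it:
    sudo git config --global --add safe.directory '*'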
00:00:41.724 + for M in /var/spdk/build-*-manifest.txt 00:00:41.724 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:41.724 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.724 + for M in /var/spdk/build-*-manifest.txt 00:00:41.724 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:41.724 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.724 ++ uname 00:00:41.724 + [[ Linux == \L\i\n\u\x ]] 00:00:41.724 + sudo dmesg -T 00:00:41.724 + sudo dmesg --clear 00:00:41.983 + dmesg_pid=3539708 00:00:41.983 + [[ Fedora Linux == FreeBSD ]] 00:00:41.983 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.983 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.983 + sudo dmesg -Tw 00:00:41.983 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:41.983 + [[ -x /usr/src/fio-static/fio ]] 00:00:41.983 + export FIO_BIN=/usr/src/fio-static/fio 00:00:41.983 + FIO_BIN=/usr/src/fio-static/fio 00:00:41.983 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:41.983 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:41.983 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:41.983 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.983 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.983 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:41.983 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.983 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.983 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:41.983 Test configuration: 00:00:41.983 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.983 SPDK_TEST_NVMF=1 00:00:41.983 SPDK_TEST_NVME_CLI=1 00:00:41.983 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.983 SPDK_TEST_NVMF_NICS=e810 00:00:41.983 SPDK_TEST_VFIOUSER=1 00:00:41.983 SPDK_RUN_UBSAN=1 00:00:41.983 NET_TYPE=phy 00:00:41.983 SPDK_TEST_NATIVE_DPDK=main 00:00:41.983 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:41.983 RUN_NIGHTLY=1 14:41:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:41.983 14:41:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:41.983 14:41:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:41.983 14:41:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:41.983 14:41:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.983 14:41:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.983 14:41:27 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.983 14:41:27 -- paths/export.sh@5 -- $ export PATH 00:00:41.983 14:41:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.983 14:41:27 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:41.983 14:41:27 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:41.983 14:41:27 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714135287.XXXXXX 00:00:41.983 14:41:27 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714135287.Yks3pO 00:00:41.983 14:41:27 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:41.983 14:41:27 -- common/autobuild_common.sh@441 -- $ '[' -n main ']' 00:00:41.983 14:41:27 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:41.983 14:41:27 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:41.983 14:41:27 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:41.983 14:41:27 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:41.983 14:41:27 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:41.983 14:41:27 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:41.983 14:41:27 -- common/autotest_common.sh@10 -- $ set +x 00:00:41.983 14:41:27 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:41.983 14:41:27 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:41.983 14:41:27 -- pm/common@17 -- $ local monitor 00:00:41.983 14:41:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.983 14:41:27 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3539744 00:00:41.983 14:41:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.983 14:41:27 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3539746 00:00:41.983 14:41:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.983 14:41:27 -- pm/common@21 -- $ date +%s 00:00:41.983 14:41:27 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3539748 00:00:41.983 14:41:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.983 14:41:27 -- pm/common@21 
-- $ date +%s 00:00:41.983 14:41:27 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3539751 00:00:41.983 14:41:27 -- pm/common@21 -- $ date +%s 00:00:41.983 14:41:27 -- pm/common@26 -- $ sleep 1 00:00:41.983 14:41:27 -- pm/common@21 -- $ date +%s 00:00:41.983 14:41:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135287 00:00:41.983 14:41:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135287 00:00:41.983 14:41:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135287 00:00:41.983 14:41:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135287 00:00:41.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135287_collect-vmstat.pm.log 00:00:41.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135287_collect-bmc-pm.bmc.pm.log 00:00:41.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135287_collect-cpu-load.pm.log 00:00:41.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135287_collect-cpu-temp.pm.log 00:00:42.920 14:41:28 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:42.920 14:41:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:42.920 14:41:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:42.920 14:41:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.920 14:41:28 -- spdk/autobuild.sh@16 -- $ date -u 00:00:42.920 Fri Apr 26 12:41:28 PM UTC 2024 00:00:42.920 14:41:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:42.920 v24.05-pre-449-g8571999d8 00:00:42.920 14:41:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:42.920 14:41:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:42.920 14:41:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:42.920 14:41:28 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:42.920 14:41:28 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:42.920 14:41:28 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.179 ************************************ 00:00:43.179 START TEST ubsan 00:00:43.180 ************************************ 00:00:43.180 14:41:28 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:43.180 using ubsan 00:00:43.180 00:00:43.180 real 0m0.000s 00:00:43.180 user 0m0.000s 00:00:43.180 sys 0m0.000s 00:00:43.180 14:41:28 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:43.180 14:41:28 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.180 ************************************ 00:00:43.180 END TEST ubsan 00:00:43.180 ************************************ 00:00:43.180 14:41:28 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:00:43.180 14:41:28 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:43.180 14:41:28 
-- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:43.180 14:41:28 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:00:43.180 14:41:28 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:43.180 14:41:28 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.180 ************************************ 00:00:43.180 START TEST build_native_dpdk 00:00:43.180 ************************************ 00:00:43.180 14:41:28 -- common/autotest_common.sh@1111 -- $ _build_native_dpdk 00:00:43.180 14:41:28 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:43.180 14:41:28 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:43.180 14:41:28 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:43.180 14:41:28 -- common/autobuild_common.sh@51 -- $ local compiler 00:00:43.180 14:41:28 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:43.180 14:41:28 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:43.180 14:41:28 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:43.180 14:41:28 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:43.180 14:41:28 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:43.180 14:41:28 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:43.180 14:41:28 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:43.180 14:41:28 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:43.180 14:41:28 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:43.180 14:41:28 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:43.180 14:41:28 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:43.180 14:41:28 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.180 14:41:28 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:43.180 7e06c0de19 examples: move alignment attribute on types for MSVC 00:00:43.180 27595cd830 drivers: move alignment attribute on types for MSVC 00:00:43.180 0efea35a2b app: move alignment attribute on types for MSVC 00:00:43.180 e2e546ab5b version: 24.07-rc0 00:00:43.180 a9778aad62 version: 24.03.0 00:00:43.180 14:41:28 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:43.180 14:41:28 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:43.180 14:41:28 -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc0 00:00:43.180 14:41:28 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:43.180 14:41:28 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:43.180 14:41:28 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:43.180 14:41:28 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:43.180 14:41:28 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:43.180 14:41:28 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:43.180 14:41:28 -- common/autobuild_common.sh@168 -- $ uname -s 00:00:43.180 14:41:28 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:43.180 14:41:28 -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc0 21.11.0 00:00:43.180 14:41:28 -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc0 '<' 21.11.0 00:00:43.180 14:41:28 -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:43.180 14:41:28 -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:43.180 14:41:28 -- scripts/common.sh@333 -- $ IFS=.-: 00:00:43.180 14:41:28 -- scripts/common.sh@333 -- $ read -ra ver1 00:00:43.180 14:41:28 -- scripts/common.sh@334 -- $ IFS=.-: 00:00:43.180 14:41:28 -- scripts/common.sh@334 -- $ read -ra ver2 00:00:43.180 14:41:28 -- scripts/common.sh@335 -- $ local 'op=<' 00:00:43.180 14:41:28 -- scripts/common.sh@337 -- $ ver1_l=4 00:00:43.180 14:41:28 -- scripts/common.sh@338 -- $ ver2_l=3 00:00:43.180 14:41:28 -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:43.180 14:41:28 -- scripts/common.sh@341 -- $ case "$op" in 00:00:43.180 14:41:28 -- scripts/common.sh@342 -- $ : 1 00:00:43.180 14:41:28 -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:43.180 14:41:28 -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:43.180 14:41:28 -- scripts/common.sh@362 -- $ decimal 24 00:00:43.180 14:41:28 -- scripts/common.sh@350 -- $ local d=24 00:00:43.180 14:41:28 -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:43.180 14:41:28 -- scripts/common.sh@352 -- $ echo 24 00:00:43.180 14:41:28 -- scripts/common.sh@362 -- $ ver1[v]=24 00:00:43.180 14:41:28 -- scripts/common.sh@363 -- $ decimal 21 00:00:43.180 14:41:28 -- scripts/common.sh@350 -- $ local d=21 00:00:43.180 14:41:28 -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:43.180 14:41:28 -- scripts/common.sh@352 -- $ echo 21 00:00:43.180 14:41:28 -- scripts/common.sh@363 -- $ ver2[v]=21 00:00:43.180 14:41:28 -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:43.180 14:41:28 -- scripts/common.sh@364 -- $ return 1 00:00:43.180 14:41:28 -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:43.180 patching file config/rte_config.h 00:00:43.180 Hunk #1 succeeded at 70 (offset 11 lines). 00:00:43.180 14:41:28 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:00:43.180 14:41:28 -- common/autobuild_common.sh@178 -- $ uname -s 00:00:43.180 14:41:28 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:00:43.180 14:41:28 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:43.180 14:41:28 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:47.365 The Meson build system 00:00:47.365 Version: 1.3.1 00:00:47.365 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:47.365 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:00:47.365 Build type: native build 00:00:47.365 Program cat found: YES (/usr/bin/cat) 00:00:47.365 Project name: DPDK 00:00:47.365 Project version: 24.07.0-rc0 00:00:47.365 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:47.365 C linker for the host machine: gcc ld.bfd 2.39-16 00:00:47.365 Host machine cpu family: x86_64 00:00:47.365 Host machine cpu: x86_64 00:00:47.365 Message: ## Building in Developer Mode ## 00:00:47.365 Program pkg-config found: YES (/usr/bin/pkg-config) 00:00:47.365 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:00:47.365 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:00:47.365 Program python3 found: YES (/usr/bin/python3) 00:00:47.365 Program cat found: YES (/usr/bin/cat) 00:00:47.365 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
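The `lt 24.07.0-rc0 21.11.0` trace a few entries above comes from the `cmp_versions`/`decimal` helpers in scripts/common.sh: both version strings are split on `.`, `-` and `:` into arrays and compared field by field, and the first differing numeric field decides the result. A simplified sketch of that logic (assumption: non-numeric fields such as `rc0` are treated as 0 here to keep it short; the real helper supports more operators and validates each field via `decimal`):

    lt() {  # usage: lt VER1 VER2 -> exit status 0 iff VER1 < VER2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v a b
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0      # e.g. "rc0" -> 0 (simplification)
            [[ $b =~ ^[0-9]+$ ]] || b=0
            ((10#$a > 10#$b)) && return 1    # 10# avoids octal parsing of "07"
            ((10#$a < 10#$b)) && return 0
        done
        return 1                             # equal -> not less-than
    }

    # Mirrors the trace: 24 > 21 in the first field, so lt returns 1 and the
    # build falls through to `patch -p1` for the newer-DPDK rte_config.h hunk.
    lt 24.07.0-rc0 21.11.0 || echo '24.07.0-rc0 is not older than 21.11.0'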
00:00:47.365 Compiler for C supports arguments -march=native: YES 00:00:47.365 Checking for size of "void *" : 8 00:00:47.365 Checking for size of "void *" : 8 (cached) 00:00:47.365 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:00:47.365 Library m found: YES 00:00:47.365 Library numa found: YES 00:00:47.365 Has header "numaif.h" : YES 00:00:47.365 Library fdt found: NO 00:00:47.365 Library execinfo found: NO 00:00:47.365 Has header "execinfo.h" : YES 00:00:47.365 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:47.365 Run-time dependency libarchive found: NO (tried pkgconfig) 00:00:47.365 Run-time dependency libbsd found: NO (tried pkgconfig) 00:00:47.365 Run-time dependency jansson found: NO (tried pkgconfig) 00:00:47.365 Run-time dependency openssl found: YES 3.0.9 00:00:47.365 Run-time dependency libpcap found: YES 1.10.4 00:00:47.365 Has header "pcap.h" with dependency libpcap: YES 00:00:47.365 Compiler for C supports arguments -Wcast-qual: YES 00:00:47.365 Compiler for C supports arguments -Wdeprecated: YES 00:00:47.365 Compiler for C supports arguments -Wformat: YES 00:00:47.366 Compiler for C supports arguments -Wformat-nonliteral: NO 00:00:47.366 Compiler for C supports arguments -Wformat-security: NO 00:00:47.366 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:47.366 Compiler for C supports arguments -Wmissing-prototypes: YES 00:00:47.366 Compiler for C supports arguments -Wnested-externs: YES 00:00:47.366 Compiler for C supports arguments -Wold-style-definition: YES 00:00:47.366 Compiler for C supports arguments -Wpointer-arith: YES 00:00:47.366 Compiler for C supports arguments -Wsign-compare: YES 00:00:47.366 Compiler for C supports arguments -Wstrict-prototypes: YES 00:00:47.366 Compiler for C supports arguments -Wundef: YES 00:00:47.366 Compiler for C supports arguments -Wwrite-strings: YES 00:00:47.366 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:00:47.366 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:00:47.366 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:47.366 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:00:47.366 Program objdump found: YES (/usr/bin/objdump) 00:00:47.366 Compiler for C supports arguments -mavx512f: YES 00:00:47.366 Checking if "AVX512 checking" compiles: YES 00:00:47.366 Fetching value of define "__SSE4_2__" : 1 00:00:47.366 Fetching value of define "__AES__" : 1 00:00:47.366 Fetching value of define "__AVX__" : 1 00:00:47.366 Fetching value of define "__AVX2__" : (undefined) 00:00:47.366 Fetching value of define "__AVX512BW__" : (undefined) 00:00:47.366 Fetching value of define "__AVX512CD__" : (undefined) 00:00:47.366 Fetching value of define "__AVX512DQ__" : (undefined) 00:00:47.366 Fetching value of define "__AVX512F__" : (undefined) 00:00:47.366 Fetching value of define "__AVX512VL__" : (undefined) 00:00:47.366 Fetching value of define "__PCLMUL__" : 1 00:00:47.366 Fetching value of define "__RDRND__" : 1 00:00:47.366 Fetching value of define "__RDSEED__" : (undefined) 00:00:47.366 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:00:47.366 Compiler for C supports arguments -Wno-format-truncation: YES 00:00:47.366 Message: lib/log: Defining dependency "log" 00:00:47.366 Message: lib/kvargs: Defining dependency "kvargs" 00:00:47.366 Message: lib/argparse: Defining dependency "argparse" 00:00:47.366 Message: lib/telemetry: Defining dependency "telemetry" 00:00:47.366 Checking for function 
"getentropy" : NO 00:00:47.366 Message: lib/eal: Defining dependency "eal" 00:00:47.366 Message: lib/ring: Defining dependency "ring" 00:00:47.366 Message: lib/rcu: Defining dependency "rcu" 00:00:47.366 Message: lib/mempool: Defining dependency "mempool" 00:00:47.366 Message: lib/mbuf: Defining dependency "mbuf" 00:00:47.366 Fetching value of define "__PCLMUL__" : 1 (cached) 00:00:47.366 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:47.366 Compiler for C supports arguments -mpclmul: YES 00:00:47.366 Compiler for C supports arguments -maes: YES 00:00:47.366 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:47.366 Compiler for C supports arguments -mavx512bw: YES 00:00:47.366 Compiler for C supports arguments -mavx512dq: YES 00:00:47.366 Compiler for C supports arguments -mavx512vl: YES 00:00:47.366 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:47.366 Compiler for C supports arguments -mavx2: YES 00:00:47.366 Compiler for C supports arguments -mavx: YES 00:00:47.366 Message: lib/net: Defining dependency "net" 00:00:47.366 Message: lib/meter: Defining dependency "meter" 00:00:47.366 Message: lib/ethdev: Defining dependency "ethdev" 00:00:47.366 Message: lib/pci: Defining dependency "pci" 00:00:47.366 Message: lib/cmdline: Defining dependency "cmdline" 00:00:47.366 Message: lib/metrics: Defining dependency "metrics" 00:00:47.366 Message: lib/hash: Defining dependency "hash" 00:00:47.366 Message: lib/timer: Defining dependency "timer" 00:00:47.366 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:47.366 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:00:47.366 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:00:47.366 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:00:47.366 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:00:47.366 Message: lib/acl: Defining dependency "acl" 00:00:47.366 Message: lib/bbdev: Defining dependency "bbdev" 00:00:47.366 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:47.366 Run-time dependency libelf found: YES 0.190 00:00:47.366 Message: lib/bpf: Defining dependency "bpf" 00:00:47.366 Message: lib/cfgfile: Defining dependency "cfgfile" 00:00:47.366 Message: lib/compressdev: Defining dependency "compressdev" 00:00:47.366 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:47.366 Message: lib/distributor: Defining dependency "distributor" 00:00:47.366 Message: lib/dmadev: Defining dependency "dmadev" 00:00:47.366 Message: lib/efd: Defining dependency "efd" 00:00:47.366 Message: lib/eventdev: Defining dependency "eventdev" 00:00:47.366 Message: lib/dispatcher: Defining dependency "dispatcher" 00:00:47.366 Message: lib/gpudev: Defining dependency "gpudev" 00:00:47.366 Message: lib/gro: Defining dependency "gro" 00:00:47.366 Message: lib/gso: Defining dependency "gso" 00:00:47.366 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:47.366 Message: lib/jobstats: Defining dependency "jobstats" 00:00:47.366 Message: lib/latencystats: Defining dependency "latencystats" 00:00:47.366 Message: lib/lpm: Defining dependency "lpm" 00:00:47.366 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:47.366 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:00:47.366 Fetching value of define "__AVX512IFMA__" : (undefined) 00:00:47.366 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:00:47.366 Message: lib/member: Defining dependency 
"member" 00:00:47.366 Message: lib/pcapng: Defining dependency "pcapng" 00:00:47.366 Compiler for C supports arguments -Wno-cast-qual: YES 00:00:47.366 Message: lib/power: Defining dependency "power" 00:00:47.366 Message: lib/rawdev: Defining dependency "rawdev" 00:00:47.366 Message: lib/regexdev: Defining dependency "regexdev" 00:00:47.366 Message: lib/mldev: Defining dependency "mldev" 00:00:47.366 Message: lib/rib: Defining dependency "rib" 00:00:47.366 Message: lib/reorder: Defining dependency "reorder" 00:00:47.366 Message: lib/sched: Defining dependency "sched" 00:00:47.366 Message: lib/security: Defining dependency "security" 00:00:47.366 Message: lib/stack: Defining dependency "stack" 00:00:47.366 Has header "linux/userfaultfd.h" : YES 00:00:47.366 Has header "linux/vduse.h" : YES 00:00:47.366 Message: lib/vhost: Defining dependency "vhost" 00:00:47.366 Message: lib/ipsec: Defining dependency "ipsec" 00:00:47.366 Message: lib/pdcp: Defining dependency "pdcp" 00:00:47.366 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:47.366 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:00:47.366 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:00:47.366 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:47.366 Message: lib/fib: Defining dependency "fib" 00:00:47.366 Message: lib/port: Defining dependency "port" 00:00:47.366 Message: lib/pdump: Defining dependency "pdump" 00:00:47.366 Message: lib/table: Defining dependency "table" 00:00:47.366 Message: lib/pipeline: Defining dependency "pipeline" 00:00:47.366 Message: lib/graph: Defining dependency "graph" 00:00:47.366 Message: lib/node: Defining dependency "node" 00:00:47.366 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:00:48.744 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:00:48.744 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:00:48.744 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:00:48.744 Compiler for C supports arguments -Wno-sign-compare: YES 00:00:48.744 Compiler for C supports arguments -Wno-unused-value: YES 00:00:48.744 Compiler for C supports arguments -Wno-format: YES 00:00:48.744 Compiler for C supports arguments -Wno-format-security: YES 00:00:48.744 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:00:48.744 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:00:48.744 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:00:48.744 Compiler for C supports arguments -Wno-unused-parameter: YES 00:00:48.744 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:48.744 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:48.744 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:48.744 Compiler for C supports arguments -march=skylake-avx512: YES 00:00:48.744 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:00:48.744 Has header "sys/epoll.h" : YES 00:00:48.744 Program doxygen found: YES (/usr/bin/doxygen) 00:00:48.744 Configuring doxy-api-html.conf using configuration 00:00:48.744 Configuring doxy-api-man.conf using configuration 00:00:48.744 Program mandb found: YES (/usr/bin/mandb) 00:00:48.744 Program sphinx-build found: NO 00:00:48.744 Configuring rte_build_config.h using configuration 00:00:48.744 Message: 00:00:48.744 ================= 00:00:48.744 Applications Enabled 00:00:48.744 ================= 00:00:48.744 00:00:48.744 apps: 00:00:48.744 dumpcap, graph, pdump, proc-info, test-acl, 
test-bbdev, test-cmdline, test-compress-perf, 00:00:48.744 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:00:48.744 test-pmd, test-regex, test-sad, test-security-perf, 00:00:48.744 00:00:48.744 Message: 00:00:48.744 ================= 00:00:48.744 Libraries Enabled 00:00:48.744 ================= 00:00:48.744 00:00:48.744 libs: 00:00:48.744 log, kvargs, argparse, telemetry, eal, ring, rcu, mempool, 00:00:48.744 mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, 00:00:48.744 timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, 00:00:48.744 distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, 00:00:48.744 ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, 00:00:48.744 regexdev, mldev, rib, reorder, sched, security, stack, vhost, 00:00:48.744 ipsec, pdcp, fib, port, pdump, table, pipeline, graph, 00:00:48.744 node, 00:00:48.744 00:00:48.744 Message: 00:00:48.744 =============== 00:00:48.744 Drivers Enabled 00:00:48.744 =============== 00:00:48.744 00:00:48.744 common: 00:00:48.744 00:00:48.744 bus: 00:00:48.744 pci, vdev, 00:00:48.744 mempool: 00:00:48.744 ring, 00:00:48.744 dma: 00:00:48.744 00:00:48.744 net: 00:00:48.744 i40e, 00:00:48.744 raw: 00:00:48.744 00:00:48.744 crypto: 00:00:48.744 00:00:48.744 compress: 00:00:48.744 00:00:48.744 regex: 00:00:48.744 00:00:48.744 ml: 00:00:48.744 00:00:48.744 vdpa: 00:00:48.744 00:00:48.744 event: 00:00:48.744 00:00:48.744 baseband: 00:00:48.744 00:00:48.744 gpu: 00:00:48.744 00:00:48.744 00:00:48.744 Message: 00:00:48.744 ================= 00:00:48.744 Content Skipped 00:00:48.744 ================= 00:00:48.744 00:00:48.744 apps: 00:00:48.744 00:00:48.744 libs: 00:00:48.744 00:00:48.744 drivers: 00:00:48.744 common/cpt: not in enabled drivers build config 00:00:48.744 common/dpaax: not in enabled drivers build config 00:00:48.744 common/iavf: not in enabled drivers build config 00:00:48.744 common/idpf: not in enabled drivers build config 00:00:48.745 common/ionic: not in enabled drivers build config 00:00:48.745 common/mvep: not in enabled drivers build config 00:00:48.745 common/octeontx: not in enabled drivers build config 00:00:48.745 bus/auxiliary: not in enabled drivers build config 00:00:48.745 bus/cdx: not in enabled drivers build config 00:00:48.745 bus/dpaa: not in enabled drivers build config 00:00:48.745 bus/fslmc: not in enabled drivers build config 00:00:48.745 bus/ifpga: not in enabled drivers build config 00:00:48.745 bus/platform: not in enabled drivers build config 00:00:48.745 bus/uacce: not in enabled drivers build config 00:00:48.745 bus/vmbus: not in enabled drivers build config 00:00:48.745 common/cnxk: not in enabled drivers build config 00:00:48.745 common/mlx5: not in enabled drivers build config 00:00:48.745 common/nfp: not in enabled drivers build config 00:00:48.745 common/nitrox: not in enabled drivers build config 00:00:48.745 common/qat: not in enabled drivers build config 00:00:48.745 common/sfc_efx: not in enabled drivers build config 00:00:48.745 mempool/bucket: not in enabled drivers build config 00:00:48.745 mempool/cnxk: not in enabled drivers build config 00:00:48.745 mempool/dpaa: not in enabled drivers build config 00:00:48.745 mempool/dpaa2: not in enabled drivers build config 00:00:48.745 mempool/octeontx: not in enabled drivers build config 00:00:48.745 mempool/stack: not in enabled drivers build config 00:00:48.745 dma/cnxk: not in enabled drivers build config 00:00:48.745 dma/dpaa: 
not in enabled drivers build config 00:00:48.745 dma/dpaa2: not in enabled drivers build config 00:00:48.745 dma/hisilicon: not in enabled drivers build config 00:00:48.745 dma/idxd: not in enabled drivers build config 00:00:48.745 dma/ioat: not in enabled drivers build config 00:00:48.745 dma/skeleton: not in enabled drivers build config 00:00:48.745 net/af_packet: not in enabled drivers build config 00:00:48.745 net/af_xdp: not in enabled drivers build config 00:00:48.745 net/ark: not in enabled drivers build config 00:00:48.745 net/atlantic: not in enabled drivers build config 00:00:48.745 net/avp: not in enabled drivers build config 00:00:48.745 net/axgbe: not in enabled drivers build config 00:00:48.745 net/bnx2x: not in enabled drivers build config 00:00:48.745 net/bnxt: not in enabled drivers build config 00:00:48.745 net/bonding: not in enabled drivers build config 00:00:48.745 net/cnxk: not in enabled drivers build config 00:00:48.745 net/cpfl: not in enabled drivers build config 00:00:48.745 net/cxgbe: not in enabled drivers build config 00:00:48.745 net/dpaa: not in enabled drivers build config 00:00:48.745 net/dpaa2: not in enabled drivers build config 00:00:48.745 net/e1000: not in enabled drivers build config 00:00:48.745 net/ena: not in enabled drivers build config 00:00:48.745 net/enetc: not in enabled drivers build config 00:00:48.745 net/enetfec: not in enabled drivers build config 00:00:48.745 net/enic: not in enabled drivers build config 00:00:48.745 net/failsafe: not in enabled drivers build config 00:00:48.745 net/fm10k: not in enabled drivers build config 00:00:48.745 net/gve: not in enabled drivers build config 00:00:48.745 net/hinic: not in enabled drivers build config 00:00:48.745 net/hns3: not in enabled drivers build config 00:00:48.745 net/iavf: not in enabled drivers build config 00:00:48.745 net/ice: not in enabled drivers build config 00:00:48.745 net/idpf: not in enabled drivers build config 00:00:48.745 net/igc: not in enabled drivers build config 00:00:48.745 net/ionic: not in enabled drivers build config 00:00:48.745 net/ipn3ke: not in enabled drivers build config 00:00:48.745 net/ixgbe: not in enabled drivers build config 00:00:48.745 net/mana: not in enabled drivers build config 00:00:48.745 net/memif: not in enabled drivers build config 00:00:48.745 net/mlx4: not in enabled drivers build config 00:00:48.745 net/mlx5: not in enabled drivers build config 00:00:48.745 net/mvneta: not in enabled drivers build config 00:00:48.745 net/mvpp2: not in enabled drivers build config 00:00:48.745 net/netvsc: not in enabled drivers build config 00:00:48.745 net/nfb: not in enabled drivers build config 00:00:48.745 net/nfp: not in enabled drivers build config 00:00:48.745 net/ngbe: not in enabled drivers build config 00:00:48.745 net/null: not in enabled drivers build config 00:00:48.745 net/octeontx: not in enabled drivers build config 00:00:48.745 net/octeon_ep: not in enabled drivers build config 00:00:48.745 net/pcap: not in enabled drivers build config 00:00:48.745 net/pfe: not in enabled drivers build config 00:00:48.745 net/qede: not in enabled drivers build config 00:00:48.745 net/ring: not in enabled drivers build config 00:00:48.745 net/sfc: not in enabled drivers build config 00:00:48.745 net/softnic: not in enabled drivers build config 00:00:48.745 net/tap: not in enabled drivers build config 00:00:48.745 net/thunderx: not in enabled drivers build config 00:00:48.745 net/txgbe: not in enabled drivers build config 00:00:48.745 net/vdev_netvsc: not in 
enabled drivers build config 00:00:48.745 net/vhost: not in enabled drivers build config 00:00:48.745 net/virtio: not in enabled drivers build config 00:00:48.745 net/vmxnet3: not in enabled drivers build config 00:00:48.745 raw/cnxk_bphy: not in enabled drivers build config 00:00:48.745 raw/cnxk_gpio: not in enabled drivers build config 00:00:48.745 raw/dpaa2_cmdif: not in enabled drivers build config 00:00:48.745 raw/ifpga: not in enabled drivers build config 00:00:48.745 raw/ntb: not in enabled drivers build config 00:00:48.745 raw/skeleton: not in enabled drivers build config 00:00:48.745 crypto/armv8: not in enabled drivers build config 00:00:48.745 crypto/bcmfs: not in enabled drivers build config 00:00:48.745 crypto/caam_jr: not in enabled drivers build config 00:00:48.745 crypto/ccp: not in enabled drivers build config 00:00:48.745 crypto/cnxk: not in enabled drivers build config 00:00:48.745 crypto/dpaa_sec: not in enabled drivers build config 00:00:48.745 crypto/dpaa2_sec: not in enabled drivers build config 00:00:48.745 crypto/ipsec_mb: not in enabled drivers build config 00:00:48.745 crypto/mlx5: not in enabled drivers build config 00:00:48.745 crypto/mvsam: not in enabled drivers build config 00:00:48.745 crypto/nitrox: not in enabled drivers build config 00:00:48.745 crypto/null: not in enabled drivers build config 00:00:48.745 crypto/octeontx: not in enabled drivers build config 00:00:48.745 crypto/openssl: not in enabled drivers build config 00:00:48.745 crypto/scheduler: not in enabled drivers build config 00:00:48.745 crypto/uadk: not in enabled drivers build config 00:00:48.745 crypto/virtio: not in enabled drivers build config 00:00:48.745 compress/isal: not in enabled drivers build config 00:00:48.745 compress/mlx5: not in enabled drivers build config 00:00:48.745 compress/nitrox: not in enabled drivers build config 00:00:48.745 compress/octeontx: not in enabled drivers build config 00:00:48.745 compress/zlib: not in enabled drivers build config 00:00:48.745 regex/mlx5: not in enabled drivers build config 00:00:48.745 regex/cn9k: not in enabled drivers build config 00:00:48.745 ml/cnxk: not in enabled drivers build config 00:00:48.745 vdpa/ifc: not in enabled drivers build config 00:00:48.745 vdpa/mlx5: not in enabled drivers build config 00:00:48.745 vdpa/nfp: not in enabled drivers build config 00:00:48.745 vdpa/sfc: not in enabled drivers build config 00:00:48.745 event/cnxk: not in enabled drivers build config 00:00:48.745 event/dlb2: not in enabled drivers build config 00:00:48.745 event/dpaa: not in enabled drivers build config 00:00:48.745 event/dpaa2: not in enabled drivers build config 00:00:48.745 event/dsw: not in enabled drivers build config 00:00:48.745 event/opdl: not in enabled drivers build config 00:00:48.745 event/skeleton: not in enabled drivers build config 00:00:48.745 event/sw: not in enabled drivers build config 00:00:48.745 event/octeontx: not in enabled drivers build config 00:00:48.745 baseband/acc: not in enabled drivers build config 00:00:48.745 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:00:48.745 baseband/fpga_lte_fec: not in enabled drivers build config 00:00:48.745 baseband/la12xx: not in enabled drivers build config 00:00:48.745 baseband/null: not in enabled drivers build config 00:00:48.745 baseband/turbo_sw: not in enabled drivers build config 00:00:48.745 gpu/cuda: not in enabled drivers build config 00:00:48.745 00:00:48.745 00:00:48.745 Build targets in project: 224 00:00:48.745 00:00:48.745 DPDK 24.07.0-rc0 
00:00:48.745 00:00:48.745 User defined options 00:00:48.745 libdir : lib 00:00:48.745 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:48.745 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:00:48.745 c_link_args : 00:00:48.745 enable_docs : false 00:00:48.745 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:48.745 enable_kmods : false 00:00:48.745 machine : native 00:00:48.745 tests : false 00:00:48.745 00:00:48.745 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:48.745 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:00:48.745 14:41:34 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:00:48.745 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:00:48.745 [1/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:00:48.745 [2/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:00:48.745 [3/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:00:48.745 [4/722] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:00:48.745 [5/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:00:48.745 [6/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:00:48.745 [7/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:00:48.745 [8/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:00:48.745 [9/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:00:48.745 [10/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:00:48.745 [11/722] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:00:48.745 [12/722] Linking static target lib/librte_kvargs.a 00:00:48.745 [13/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:00:49.008 [14/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:00:49.008 [15/722] Compiling C object lib/librte_log.a.p/log_log.c.o 00:00:49.008 [16/722] Linking static target lib/librte_log.a 00:00:49.268 [17/722] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:00:49.268 [18/722] Linking static target lib/librte_argparse.a 00:00:49.268 [19/722] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.532 [20/722] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.796 [21/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:00:49.796 [22/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:00:49.796 [23/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:00:49.796 [24/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:00:49.796 [25/722] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.796 [26/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:49.796 [27/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:49.796 [28/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:49.796 [29/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:49.796 [30/722] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:00:49.796 [31/722] Linking target lib/librte_log.so.24.2 00:00:49.796 [32/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:49.796 [33/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:49.796 [34/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:49.796 [35/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:49.796 [36/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:49.796 [37/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:49.796 [38/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:49.796 [39/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:49.796 [40/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:49.796 [41/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:49.796 [42/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:49.796 [43/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:49.796 [44/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:49.796 [45/722] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:49.796 [46/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:50.063 [47/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:50.063 [48/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:50.063 [49/722] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:50.063 [50/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:00:50.063 [51/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:50.063 [52/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:50.063 [53/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:50.063 [54/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:50.063 [55/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:50.063 [56/722] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:00:50.063 [57/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:50.063 [58/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:50.063 [59/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:50.063 [60/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:50.063 [61/722] Linking target lib/librte_kvargs.so.24.2 00:00:50.063 [62/722] Linking target lib/librte_argparse.so.24.2 00:00:50.321 [63/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:50.321 [64/722] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:00:50.321 [65/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:50.321 [66/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:00:50.582 [67/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:00:50.582 [68/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:00:50.582 [69/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:50.582 [70/722] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:00:50.582 [71/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:00:50.582 [72/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:50.848 [73/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:50.848 [74/722] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:50.848 [75/722] Linking static target lib/librte_pci.a 00:00:50.848 [76/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:50.848 [77/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:50.848 [78/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:50.848 [79/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:51.109 [80/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:51.109 [81/722] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:00:51.110 [82/722] Linking static target lib/net/libnet_crc_avx512_lib.a 00:00:51.110 [83/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:51.110 [84/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:51.110 [85/722] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:51.110 [86/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:51.110 [87/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:51.110 [88/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:51.110 [89/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:51.110 [90/722] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:51.110 [91/722] Linking static target lib/librte_ring.a 00:00:51.110 [92/722] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.110 [93/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:51.110 [94/722] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:51.110 [95/722] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:51.110 [96/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:00:51.110 [97/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:51.110 [98/722] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:51.110 [99/722] Linking static target lib/librte_meter.a 00:00:51.110 [100/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:51.110 [101/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:51.110 [102/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:51.376 [103/722] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:00:51.376 [104/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:51.376 [105/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:51.376 [106/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:51.376 [107/722] Linking static target lib/librte_telemetry.a 00:00:51.376 [108/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:00:51.376 [109/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:00:51.376 [110/722] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:51.376 [111/722] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:00:51.376 [112/722] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:51.376 [113/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:51.376 [114/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:51.639 [115/722] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.639 [116/722] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.639 [117/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:51.639 [118/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:00:51.639 [119/722] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:51.639 [120/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:00:51.639 [121/722] Linking static target lib/librte_net.a 00:00:51.639 [122/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:51.639 [123/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:00:51.903 [124/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:00:51.903 [125/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:00:51.903 [126/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:51.903 [127/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:00:51.903 [128/722] Linking static target lib/librte_mempool.a 00:00:52.165 [129/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:00:52.165 [130/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:00:52.165 [131/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:52.165 [132/722] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.165 [133/722] Linking static target lib/librte_eal.a 00:00:52.165 [134/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:52.165 [135/722] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.165 [136/722] Linking target lib/librte_telemetry.so.24.2 00:00:52.165 [137/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:52.165 [138/722] Linking static target lib/librte_cmdline.a 00:00:52.165 [139/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:00:52.165 [140/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:00:52.165 [141/722] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:52.428 [142/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:00:52.428 [143/722] Linking static target lib/librte_cfgfile.a 00:00:52.428 [144/722] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:00:52.428 [145/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:52.428 [146/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:52.428 [147/722] Linking static target lib/librte_metrics.a 00:00:52.428 [148/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:52.428 [149/722] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:52.428 [150/722] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:52.690 [151/722] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:52.690 [152/722] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:00:52.690 [153/722] Linking static target lib/librte_rcu.a 00:00:52.690 [154/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:52.690 [155/722] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:00:52.690 [156/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:52.690 [157/722] Linking static target lib/librte_bitratestats.a 00:00:52.690 [158/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:52.690 [159/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:52.958 [160/722] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.958 [161/722] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.958 [162/722] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:00:52.958 [163/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:00:52.958 [164/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:00:52.958 [165/722] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:00:52.958 [166/722] Linking static target lib/librte_timer.a 00:00:52.958 [167/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:00:53.221 [168/722] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.221 [169/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:00:53.221 [170/722] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.221 [171/722] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.221 [172/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:00:53.221 [173/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:00:53.221 [174/722] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:00:53.221 [175/722] Linking static target lib/librte_bbdev.a 00:00:53.516 [176/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:00:53.516 [177/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:00:53.516 [178/722] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.516 [179/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:00:53.516 [180/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:00:53.516 [181/722] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.516 [182/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:00:53.779 [183/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:00:53.779 [184/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:00:53.779 [185/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:00:53.779 [186/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:00:53.779 [187/722] Linking static target lib/librte_compressdev.a 00:00:54.039 [188/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:00:54.039 [189/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:00:54.039 [190/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:00:54.302 [191/722] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:00:54.302 [192/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:00:54.302 [193/722] Linking static target lib/librte_distributor.a 00:00:54.302 [194/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:00:54.302 [195/722] Linking static target lib/librte_dmadev.a 00:00:54.302 [196/722] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.302 [197/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:00:54.302 [198/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:00:54.302 [199/722] Linking static target lib/librte_bpf.a 00:00:54.570 [200/722] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:00:54.570 [201/722] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:00:54.570 [202/722] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:00:54.570 [203/722] Linking static target lib/librte_dispatcher.a 00:00:54.570 [204/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:00:54.570 [205/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:00:54.570 [206/722] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:00:54.570 [207/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:00:54.830 [208/722] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.830 [209/722] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:00:54.830 [210/722] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.830 [211/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:00:54.830 [212/722] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:00:54.830 [213/722] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:00:54.830 [214/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:00:54.830 [215/722] Linking static target lib/librte_gpudev.a 00:00:54.830 [216/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:00:54.830 [217/722] Linking static target lib/librte_gro.a 00:00:54.830 [218/722] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:00:54.830 [219/722] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:00:54.830 [220/722] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:00:54.830 [221/722] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.830 [222/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:00:55.094 [223/722] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:00:55.094 [224/722] Linking static target lib/librte_jobstats.a 00:00:55.094 [225/722] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:00:55.094 [226/722] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.094 [227/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:00:55.357 [228/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:00:55.357 [229/722] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.357 [230/722] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.357 [231/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:00:55.617 
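Note: the "Generating lib/<name>.sym_chk with a custom command" steps interspersed above are DPDK's exported-symbol check (buildtools/check-symbols.sh in the DPDK tree), which verifies that what each built library actually exports matches its version.map. A minimal sketch of the idea, with illustrative paths; this is not the real script:

    # list symbols the built shared object actually exports
    nm -D --defined-only build-tmp/lib/librte_kvargs.so.24.2 \
      | awk '{print $3}' | sort -u > exported.txt
    # list rte_* symbols named in the library's version map
    grep -Eo '\brte_[A-Za-z0-9_]+' lib/kvargs/version.map | sort -u > declared.txt
    diff exported.txt declared.txt   # differences mean map and binary disagree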
[232/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:00:55.617 [233/722] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.617 [234/722] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:00:55.617 [235/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:00:55.617 [236/722] Linking static target lib/librte_latencystats.a 00:00:55.617 [237/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:00:55.617 [238/722] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:00:55.617 [239/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:00:55.617 [240/722] Linking static target lib/member/libsketch_avx512_tmp.a 00:00:55.617 [241/722] Linking static target lib/librte_ip_frag.a 00:00:55.617 [242/722] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:00:55.879 [243/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:00:55.879 [244/722] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:00:55.879 [245/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:00:55.879 [246/722] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.879 [247/722] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:00:56.140 [248/722] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:00:56.140 [249/722] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.140 [250/722] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:00:56.140 [251/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:00:56.140 [252/722] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.140 [253/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:00:56.140 [254/722] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:00:56.140 [255/722] Linking static target lib/librte_gso.a 00:00:56.400 [256/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:00:56.400 [257/722] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:00:56.400 [258/722] Linking static target lib/librte_regexdev.a 00:00:56.664 [259/722] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:00:56.664 [260/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:00:56.664 [261/722] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.664 [262/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:00:56.664 [263/722] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:00:56.664 [264/722] Linking static target lib/librte_rawdev.a 00:00:56.664 [265/722] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:00:56.664 [266/722] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:00:56.664 [267/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:00:56.664 [268/722] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:00:56.929 [269/722] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:00:56.929 [270/722] Linking static target lib/librte_efd.a 00:00:56.929 [271/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:00:56.929 
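Note: targets such as lib/member/libsketch_avx512_tmp.a above (and the acl/fib avx2/avx512 helper archives later in this log) are ISA-specific objects that meson compiles into separate temporary archives with the appropriate -march flags; DPDK picks between them at runtime based on CPU flags, so they are built even on hosts that cannot execute them. A quick, purely illustrative host check:

    grep -m1 -qw avx512f /proc/cpuinfo \
      && echo "AVX-512 available: avx512 paths usable" \
      || echo "no AVX-512: scalar/SSE fallback paths used"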
[272/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:00:56.929 [273/722] Linking static target lib/librte_mldev.a 00:00:56.929 [274/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:00:56.930 [275/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:00:56.930 [276/722] Linking static target lib/librte_stack.a 00:00:56.930 [277/722] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:00:56.930 [278/722] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:00:56.930 [279/722] Linking static target lib/librte_pcapng.a 00:00:57.189 [280/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:00:57.189 [281/722] Linking static target lib/librte_lpm.a 00:00:57.189 [282/722] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:00:57.189 [283/722] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:00:57.189 [284/722] Linking static target lib/acl/libavx2_tmp.a 00:00:57.189 [285/722] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.189 [286/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:00:57.189 [287/722] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:00:57.189 [288/722] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.189 [289/722] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:00:57.449 [290/722] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:00:57.449 [291/722] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.449 [292/722] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:00:57.449 [293/722] Linking static target lib/librte_hash.a 00:00:57.449 [294/722] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:00:57.449 [295/722] Linking static target lib/librte_reorder.a 00:00:57.449 [296/722] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:00:57.449 [297/722] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.449 [298/722] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:00:57.449 [299/722] Linking static target lib/acl/libavx512_tmp.a 00:00:57.449 [300/722] Linking static target lib/librte_acl.a 00:00:57.449 [301/722] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:00:57.720 [302/722] Linking static target lib/librte_power.a 00:00:57.720 [303/722] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:00:57.720 [304/722] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:00:57.720 [305/722] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.720 [306/722] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:00:57.720 [307/722] Linking static target lib/librte_security.a 00:00:57.720 [308/722] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.720 [309/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:00:57.720 [310/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:00:57.720 [311/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:00:57.720 [312/722] Linking static target lib/librte_mbuf.a 00:00:57.981 [313/722] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.981 [314/722] 
Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:00:57.981 [315/722] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:00:57.981 [316/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:00:57.981 [317/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:00:57.981 [318/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:00:57.981 [319/722] Linking static target lib/librte_rib.a 00:00:57.981 [320/722] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.250 [321/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:00:58.250 [322/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:00:58.250 [323/722] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:00:58.250 [324/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:00:58.250 [325/722] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.510 [326/722] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.510 [327/722] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:00:58.510 [328/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:00:58.510 [329/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:00:58.510 [330/722] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:00:58.510 [331/722] Linking static target lib/fib/libtrie_avx512_tmp.a 00:00:58.510 [332/722] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:00:58.510 [333/722] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:00:58.510 [334/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:00:58.510 [335/722] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.777 [336/722] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.777 [337/722] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.777 [338/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:00:59.036 [339/722] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:00:59.036 [340/722] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:00:59.036 [341/722] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:00:59.296 [342/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:00:59.296 [343/722] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:00:59.296 [344/722] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.296 [345/722] Linking static target lib/librte_member.a 00:00:59.296 [346/722] Linking static target lib/librte_eventdev.a 00:00:59.296 [347/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:00:59.296 [348/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:00:59.558 [349/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:00:59.558 [350/722] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:00:59.558 [351/722] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:00:59.558 [352/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:00:59.558 [353/722] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:00:59.558 [354/722] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:00:59.558 [355/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:00:59.824 [356/722] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:00:59.824 [357/722] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:00:59.824 [358/722] Linking static target lib/librte_ethdev.a 00:00:59.824 [359/722] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.824 [360/722] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:00:59.824 [361/722] Linking static target lib/librte_sched.a 00:00:59.824 [362/722] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:00:59.824 [363/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:00:59.824 [364/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:00:59.824 [365/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:00:59.824 [366/722] Linking static target lib/librte_cryptodev.a 00:00:59.824 [367/722] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:00:59.824 [368/722] Linking static target lib/librte_fib.a 00:01:00.090 [369/722] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:00.090 [370/722] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:00.090 [371/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:00.090 [372/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:00.090 [373/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:00.090 [374/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:00.351 [375/722] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:00.351 [376/722] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:00.351 [377/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:00.351 [378/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:00.351 [379/722] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.351 [380/722] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:00.615 [381/722] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.615 [382/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:00.615 [383/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:00.615 [384/722] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:00.615 [385/722] Linking static target lib/librte_pdump.a 00:01:00.615 [386/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:00.876 [387/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:00.876 [388/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:00.876 [389/722] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:00.876 [390/722] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:01.140 [391/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:01.140 [392/722] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:01.140 [393/722] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:01.140 [394/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:01.140 [395/722] Compiling C object 
lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:01.140 [396/722] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.140 [397/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:01.140 [398/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:01.140 [399/722] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:01.405 [400/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:01.405 [401/722] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:01.405 [402/722] Linking static target lib/librte_ipsec.a 00:01:01.405 [403/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:01.405 [404/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:01.405 [405/722] Linking static target lib/librte_table.a 00:01:01.667 [406/722] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:01.667 [407/722] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:01.667 [408/722] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:01.943 [409/722] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.943 [410/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:01.943 [411/722] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.206 [412/722] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:02.207 [413/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:02.207 [414/722] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:02.207 [415/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:02.207 [416/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:02.473 [417/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:02.473 [418/722] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:02.473 [419/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:02.473 [420/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:02.473 [421/722] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:02.473 [422/722] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:02.736 [423/722] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.736 [424/722] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.736 [425/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:02.736 [426/722] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:02.736 [427/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:02.736 [428/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:02.736 [429/722] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:03.000 [430/722] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:03.000 [431/722] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:03.000 [432/722] Linking static target drivers/librte_bus_vdev.a 00:01:03.000 [433/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:03.000 [434/722] Compiling C object 
drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:03.264 [435/722] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:03.264 [436/722] Linking static target lib/librte_port.a 00:01:03.264 [437/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:03.264 [438/722] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:03.264 [439/722] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:03.264 [440/722] Linking static target drivers/librte_bus_pci.a 00:01:03.264 [441/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:03.264 [442/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:03.264 [443/722] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:03.264 [444/722] Linking static target lib/librte_graph.a 00:01:03.264 [445/722] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:03.264 [446/722] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.527 [447/722] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.527 [448/722] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:03.527 [449/722] Linking target lib/librte_eal.so.24.2 00:01:03.791 [450/722] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:03.791 [451/722] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:03.791 [452/722] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:03.791 [453/722] Linking target lib/librte_ring.so.24.2 00:01:03.791 [454/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:04.054 [455/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:04.054 [456/722] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:04.054 [457/722] Linking target lib/librte_meter.so.24.2 00:01:04.054 [458/722] Linking target lib/librte_pci.so.24.2 00:01:04.054 [459/722] Linking target lib/librte_timer.so.24.2 00:01:04.054 [460/722] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:04.054 [461/722] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.054 [462/722] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.054 [463/722] Linking target lib/librte_cfgfile.so.24.2 00:01:04.054 [464/722] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:04.054 [465/722] Linking target lib/librte_acl.so.24.2 00:01:04.054 [466/722] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:04.054 [467/722] Linking target lib/librte_dmadev.so.24.2 00:01:04.321 [468/722] Linking target lib/librte_jobstats.so.24.2 00:01:04.321 [469/722] Linking target lib/librte_rawdev.so.24.2 00:01:04.321 [470/722] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:04.321 [471/722] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:04.321 [472/722] Linking target lib/librte_rcu.so.24.2 00:01:04.321 [473/722] Linking target lib/librte_mempool.so.24.2 00:01:04.321 [474/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:04.321 [475/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 
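Note: the "Generating drivers/rte_bus_vdev.pmd.c" and "drivers/rte_bus_pci.pmd.c" steps above emit small generated C files that embed per-driver (PMD) metadata into each driver library. Once a driver has linked, that metadata can be read back with DPDK's pmdinfo tool; the path below is illustrative:

    python3 usertools/dpdk-pmdinfo.py build-tmp/drivers/librte_bus_pci.so.24.2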
00:01:04.321 [476/722] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:04.321 [477/722] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:04.321 [478/722] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:04.321 [479/722] Linking target lib/librte_stack.so.24.2 00:01:04.321 [480/722] Linking static target drivers/librte_mempool_ring.a 00:01:04.321 [481/722] Linking target drivers/librte_bus_vdev.so.24.2 00:01:04.321 [482/722] Linking target drivers/librte_bus_pci.so.24.2 00:01:04.321 [483/722] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.321 [484/722] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:04.321 [485/722] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:04.321 [486/722] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:04.583 [487/722] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:04.583 [488/722] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:04.583 [489/722] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:04.583 [490/722] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:04.583 [491/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:04.583 [492/722] Linking target lib/librte_rib.so.24.2 00:01:04.583 [493/722] Linking target lib/librte_mbuf.so.24.2 00:01:04.583 [494/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:04.583 [495/722] Linking target drivers/librte_mempool_ring.so.24.2 00:01:04.584 [496/722] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:04.584 [497/722] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:04.584 [498/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:04.584 [499/722] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:04.584 [500/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:04.845 [501/722] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:04.845 [502/722] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:04.845 [503/722] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:04.845 [504/722] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:04.845 [505/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:04.845 [506/722] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:04.845 [507/722] Linking target lib/librte_net.so.24.2 00:01:04.845 [508/722] Linking target lib/librte_bbdev.so.24.2 00:01:04.845 [509/722] Linking target lib/librte_compressdev.so.24.2 00:01:05.108 [510/722] Linking target lib/librte_distributor.so.24.2 00:01:05.108 [511/722] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:05.108 [512/722] Linking target lib/librte_cryptodev.so.24.2 00:01:05.108 [513/722] Linking target lib/librte_gpudev.so.24.2 00:01:05.108 [514/722] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:05.108 [515/722] Linking target lib/librte_regexdev.so.24.2 00:01:05.108 [516/722] Linking target lib/librte_mldev.so.24.2 00:01:05.108 [517/722] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 
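Note: the "Generating symbol file lib/librte_*.symbols" steps above are meson's relink-avoidance mechanism: it records each shared library's exported-symbol list and relinks dependents only when that list changes between builds. The files are plain text and can be inspected directly (path copied from the log above):

    head build-tmp/lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols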
00:01:05.108 [518/722] Linking target lib/librte_reorder.so.24.2 00:01:05.108 [519/722] Linking target lib/librte_sched.so.24.2 00:01:05.108 [520/722] Linking target lib/librte_fib.so.24.2 00:01:05.108 [521/722] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:05.369 [522/722] Linking target lib/librte_cmdline.so.24.2 00:01:05.369 [523/722] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:05.369 [524/722] Linking target lib/librte_security.so.24.2 00:01:05.369 [525/722] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:05.369 [526/722] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:05.369 [527/722] Linking target lib/librte_hash.so.24.2 00:01:05.369 [528/722] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:05.369 [529/722] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:05.632 [530/722] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:05.632 [531/722] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:05.632 [532/722] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:05.632 [533/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:05.632 [534/722] Linking target lib/librte_lpm.so.24.2 00:01:05.632 [535/722] Linking target lib/librte_efd.so.24.2 00:01:05.632 [536/722] Linking target lib/librte_member.so.24.2 00:01:05.632 [537/722] Linking target lib/librte_ipsec.so.24.2 00:01:05.632 [538/722] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:05.632 [539/722] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:05.632 [540/722] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:05.896 [541/722] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:05.896 [542/722] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:05.896 [543/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:05.896 [544/722] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:05.896 [545/722] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:05.896 [546/722] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:05.896 [547/722] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:06.181 [548/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:06.181 [549/722] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:06.181 [550/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:06.181 [551/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:06.181 [552/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:06.181 [553/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:06.181 [554/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:06.467 [555/722] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:06.731 [556/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:06.731 [557/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:06.731 [558/722] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:06.731 [559/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:06.731 [560/722] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:06.991 [561/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:06.991 [562/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:06.991 [563/722] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:06.991 [564/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:06.991 [565/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:07.253 [566/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:07.253 [567/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:07.253 [568/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:07.253 [569/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:07.514 [570/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:07.514 [571/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:07.779 [572/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:07.779 [573/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:08.041 [574/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:08.041 [575/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:08.041 [576/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:08.041 [577/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:08.041 [578/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:08.299 [579/722] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:08.299 [580/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:08.299 [581/722] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.300 [582/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:08.300 [583/722] Linking target lib/librte_ethdev.so.24.2 00:01:08.300 [584/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:08.565 [585/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:08.565 [586/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:08.565 [587/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:08.565 [588/722] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:08.826 [589/722] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:08.826 [590/722] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:08.826 [591/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:08.826 [592/722] Linking target lib/librte_bpf.so.24.2 00:01:08.826 [593/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:08.826 [594/722] Linking target lib/librte_metrics.so.24.2 00:01:08.826 [595/722] Linking target lib/librte_eventdev.so.24.2 00:01:08.826 [596/722] Linking target lib/librte_gro.so.24.2 
00:01:08.826 [597/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:08.826 [598/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:08.826 [599/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:09.089 [600/722] Linking target lib/librte_gso.so.24.2 00:01:09.089 [601/722] Linking target lib/librte_ip_frag.so.24.2 00:01:09.089 [602/722] Linking static target lib/librte_pdcp.a 00:01:09.089 [603/722] Linking target lib/librte_pcapng.so.24.2 00:01:09.089 [604/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:09.089 [605/722] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:09.089 [606/722] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:09.089 [607/722] Linking target lib/librte_power.so.24.2 00:01:09.089 [608/722] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:09.089 [609/722] Linking target lib/librte_bitratestats.so.24.2 00:01:09.089 [610/722] Linking target lib/librte_latencystats.so.24.2 00:01:09.089 [611/722] Linking target lib/librte_dispatcher.so.24.2 00:01:09.353 [612/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:09.353 [613/722] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:09.353 [614/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:09.353 [615/722] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:09.353 [616/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:09.353 [617/722] Linking target lib/librte_pdump.so.24.2 00:01:09.353 [618/722] Linking target lib/librte_graph.so.24.2 00:01:09.353 [619/722] Linking target lib/librte_port.so.24.2 00:01:09.353 [620/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:09.353 [621/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:09.616 [622/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:09.616 [623/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:09.616 [624/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:09.616 [625/722] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.616 [626/722] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:09.616 [627/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:09.616 [628/722] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:09.616 [629/722] Linking target lib/librte_pdcp.so.24.2 00:01:09.616 [630/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:09.616 [631/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:09.878 [632/722] Linking target lib/librte_table.so.24.2 00:01:09.878 [633/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:09.878 [634/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:09.878 [635/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:09.878 [636/722] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:10.141 [637/722] Generating symbol file 
lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:10.142 [638/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:10.142 [639/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:10.403 [640/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:10.403 [641/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:10.662 [642/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:10.662 [643/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:10.662 [644/722] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:10.662 [645/722] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:10.662 [646/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:10.920 [647/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:10.920 [648/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:10.920 [649/722] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:10.920 [650/722] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:10.920 [651/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:10.920 [652/722] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:10.920 [653/722] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:11.179 [654/722] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:11.179 [655/722] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:11.179 [656/722] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:11.179 [657/722] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:11.179 [658/722] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:11.438 [659/722] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:11.438 [660/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:11.438 [661/722] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:11.697 [662/722] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:11.697 [663/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:11.697 [664/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:11.957 [665/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:11.957 [666/722] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:11.957 [667/722] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:11.957 [668/722] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:11.957 [669/722] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:11.957 [670/722] Linking static target drivers/librte_net_i40e.a 00:01:12.217 [671/722] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:12.217 [672/722] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:12.217 [673/722] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:12.475 [674/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:12.475 [675/722] Compiling C object 
app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:12.733 [676/722] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.733 [677/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:12.733 [678/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:12.733 [679/722] Linking target drivers/librte_net_i40e.so.24.2 00:01:12.990 [680/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:12.990 [681/722] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:13.248 [682/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:13.507 [683/722] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:13.764 [684/722] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:14.330 [685/722] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:14.587 [686/722] Linking static target lib/librte_node.a 00:01:14.844 [687/722] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.844 [688/722] Linking target lib/librte_node.so.24.2 00:01:15.102 [689/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:15.668 [690/722] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:15.668 [691/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:17.563 [692/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:17.821 [693/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:24.437 [694/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.546 [695/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.546 [696/722] Linking static target lib/librte_vhost.a 00:01:56.546 [697/722] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.546 [698/722] Linking target lib/librte_vhost.so.24.2 00:02:06.528 [699/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:06.528 [700/722] Linking static target lib/librte_pipeline.a 00:02:07.095 [701/722] Linking target app/dpdk-test-acl 00:02:07.095 [702/722] Linking target app/dpdk-proc-info 00:02:07.095 [703/722] Linking target app/dpdk-dumpcap 00:02:07.095 [704/722] Linking target app/dpdk-test-sad 00:02:07.095 [705/722] Linking target app/dpdk-pdump 00:02:07.095 [706/722] Linking target app/dpdk-test-cmdline 00:02:07.095 [707/722] Linking target app/dpdk-test-pipeline 00:02:07.095 [708/722] Linking target app/dpdk-test-fib 00:02:07.095 [709/722] Linking target app/dpdk-test-gpudev 00:02:07.095 [710/722] Linking target app/dpdk-test-dma-perf 00:02:07.095 [711/722] Linking target app/dpdk-test-flow-perf 00:02:07.095 [712/722] Linking target app/dpdk-test-regex 00:02:07.095 [713/722] Linking target app/dpdk-test-security-perf 00:02:07.095 [714/722] Linking target app/dpdk-test-crypto-perf 00:02:07.095 [715/722] Linking target app/dpdk-test-mldev 00:02:07.095 [716/722] Linking target app/dpdk-graph 00:02:07.095 [717/722] Linking target app/dpdk-test-bbdev 00:02:07.095 [718/722] Linking target app/dpdk-test-compress-perf 00:02:07.095 [719/722] Linking target app/dpdk-test-eventdev 00:02:07.095 [720/722] Linking target app/dpdk-testpmd 00:02:08.994 [721/722] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.254 
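Note: at this point nearly every target has linked; the dpdk-* test and tool binaries listed above live under app/ in the build tree. A hedged smoke test that needs no hugepages or real NIC is possible against a null vdev; all options here are illustrative and are not what the autotest itself runs:

    ./build-tmp/app/dpdk-testpmd --no-huge --vdev=net_null0 -- --stats-period 2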
[722/722] Linking target lib/librte_pipeline.so.24.2 00:02:09.254 14:42:54 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:09.254 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:09.254 [0/1] Installing files. 00:02:09.516 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.518 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:09.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:09.519 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.519 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.520 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.521 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:09.522 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:09.522 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_meter.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_efd.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:09.781 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_rib.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.351 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:10.352 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing drivers/librte_bus_vdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:10.352 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:10.352 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.352 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:10.352 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:10.357 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:10.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:10.357 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:10.357 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:10.357 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:10.357 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:10.357 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:10.357 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:10.357 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:10.357 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:10.357 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:10.357 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:10.357 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:10.357 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:10.358 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:10.358 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:10.358 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:10.358 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:10.358 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:10.358 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:10.358 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:10.358 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:10.358 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 
00:02:10.358 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:10.358 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:10.358 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:10.358 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:10.358 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:10.358 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:10.358 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:10.358 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:10.358 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:10.358 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:10.358 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:10.358 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:10.358 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:10.358 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:10.358 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:10.358 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:10.358 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:10.358 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:10.358 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:10.358 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:10.358 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:10.358 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:10.358 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:10.358 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:10.358 Installing symlink pointing to 
librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:10.358 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:10.358 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:10.358 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:10.358 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:10.358 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:10.358 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:10.358 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:10.358 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:10.358 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:10.358 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:10.358 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:10.358 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:10.358 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:10.358 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:10.358 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:10.358 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:10.358 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:10.358 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:10.358 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:10.358 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:10.358 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:10.358 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:10.358 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:10.358 Installing 
symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:10.358 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:10.358 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:10.358 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:10.358 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:10.358 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:10.359 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:10.359 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:10.359 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:10.359 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:10.359 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:10.359 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:10.359 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:10.359 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:10.359 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:10.359 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:10.359 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:10.359 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:10.359 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:10.359 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:10.359 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:10.359 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:10.359 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:10.359 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:10.359 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:10.359 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:10.359 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:10.359 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:10.359 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:10.359 Installing symlink pointing to librte_reorder.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:10.359 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:10.359 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:10.359 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:10.359 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:10.359 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:10.359 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:10.359 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:10.359 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:10.359 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:10.359 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:10.359 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:10.359 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:10.359 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:10.359 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:10.359 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:10.359 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:10.359 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:10.359 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:10.359 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:10.359 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:10.359 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:10.359 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:10.359 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:10.359 Installing symlink pointing to librte_graph.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:10.359 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:10.359 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:10.359 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:10.359 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:10.359 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:10.359 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:10.359 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:10.359 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:10.359 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:10.359 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:10.359 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:10.359 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:10.359 14:42:55 -- common/autobuild_common.sh@189 -- $ uname -s 00:02:10.359 14:42:55 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:10.359 14:42:55 -- common/autobuild_common.sh@200 -- $ cat 00:02:10.359 14:42:55 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.359 00:02:10.359 real 1m27.154s 00:02:10.359 user 18m31.324s 00:02:10.359 sys 2m11.332s 00:02:10.360 14:42:55 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:10.360 14:42:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.360 ************************************ 00:02:10.360 END TEST build_native_dpdk 00:02:10.360 ************************************ 00:02:10.360 14:42:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:10.360 14:42:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:10.360 14:42:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:10.360 14:42:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:10.360 14:42:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:10.360 14:42:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:10.360 14:42:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:10.360 14:42:55 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user 
--with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:10.360 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:10.634 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:10.634 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:10.634 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:10.897 Using 'verbs' RDMA provider 00:02:21.432 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:29.575 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:29.575 Creating mk/config.mk...done. 00:02:29.575 Creating mk/cc.flags.mk...done. 00:02:29.575 Type 'make' to build. 00:02:29.575 14:43:15 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:29.575 14:43:15 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:29.575 14:43:15 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:29.575 14:43:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.575 ************************************ 00:02:29.575 START TEST make 00:02:29.575 ************************************ 00:02:29.575 14:43:15 -- common/autotest_common.sh@1111 -- $ make -j48 00:02:29.834 make[1]: Nothing to be done for 'all'. 00:02:31.764 The Meson build system 00:02:31.764 Version: 1.3.1 00:02:31.764 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:31.764 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:31.764 Build type: native build 00:02:31.764 Project name: libvfio-user 00:02:31.764 Project version: 0.0.1 00:02:31.764 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:31.764 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:31.764 Host machine cpu family: x86_64 00:02:31.764 Host machine cpu: x86_64 00:02:31.764 Run-time dependency threads found: YES 00:02:31.764 Library dl found: YES 00:02:31.764 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:31.764 Run-time dependency json-c found: YES 0.17 00:02:31.764 Run-time dependency cmocka found: YES 1.1.7 00:02:31.764 Program pytest-3 found: NO 00:02:31.764 Program flake8 found: NO 00:02:31.764 Program misspell-fixer found: NO 00:02:31.764 Program restructuredtext-lint found: NO 00:02:31.764 Program valgrind found: YES (/usr/bin/valgrind) 00:02:31.764 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.764 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.764 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.764 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:31.764 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:31.764 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:31.764 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:31.764 Build targets in project: 8 00:02:31.764 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:31.764 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:31.764 00:02:31.764 libvfio-user 0.0.1 00:02:31.764 00:02:31.764 User defined options 00:02:31.764 buildtype : debug 00:02:31.764 default_library: shared 00:02:31.764 libdir : /usr/local/lib 00:02:31.764 00:02:31.764 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:32.385 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:32.385 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:32.385 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:32.385 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:32.648 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:32.648 [5/37] Compiling C object samples/null.p/null.c.o 00:02:32.648 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:32.648 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:32.648 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:32.648 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:32.648 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:32.648 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:32.648 [12/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:32.648 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:32.648 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:32.648 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:32.648 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:32.648 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:32.648 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:32.648 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:32.648 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:32.648 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:32.648 [22/37] Compiling C object samples/server.p/server.c.o 00:02:32.648 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:32.648 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:32.648 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:32.648 [26/37] Compiling C object samples/client.p/client.c.o 00:02:32.648 [27/37] Linking target samples/client 00:02:32.908 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:32.908 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:32.908 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:32.908 [31/37] Linking target test/unit_tests 00:02:33.171 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:33.171 [33/37] Linking target samples/gpio-pci-idio-16 00:02:33.171 [34/37] Linking target samples/server 00:02:33.171 [35/37] Linking target samples/lspci 00:02:33.171 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:33.171 [37/37] Linking target samples/null 00:02:33.171 INFO: autodetecting backend as ninja 00:02:33.171 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:33.171 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:34.114 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:34.114 ninja: no work to do. 00:02:46.383 CC lib/log/log.o 00:02:46.383 CC lib/log/log_flags.o 00:02:46.383 CC lib/log/log_deprecated.o 00:02:46.383 CC lib/ut/ut.o 00:02:46.383 CC lib/ut_mock/mock.o 00:02:46.383 LIB libspdk_ut_mock.a 00:02:46.383 LIB libspdk_log.a 00:02:46.383 SO libspdk_ut_mock.so.6.0 00:02:46.383 LIB libspdk_ut.a 00:02:46.383 SO libspdk_log.so.7.0 00:02:46.383 SO libspdk_ut.so.2.0 00:02:46.383 SYMLINK libspdk_ut_mock.so 00:02:46.383 SYMLINK libspdk_ut.so 00:02:46.383 SYMLINK libspdk_log.so 00:02:46.383 CC lib/ioat/ioat.o 00:02:46.383 CXX lib/trace_parser/trace.o 00:02:46.383 CC lib/util/base64.o 00:02:46.383 CC lib/dma/dma.o 00:02:46.383 CC lib/util/bit_array.o 00:02:46.383 CC lib/util/cpuset.o 00:02:46.383 CC lib/util/crc16.o 00:02:46.383 CC lib/util/crc32.o 00:02:46.383 CC lib/util/crc32c.o 00:02:46.383 CC lib/util/crc32_ieee.o 00:02:46.383 CC lib/util/crc64.o 00:02:46.383 CC lib/util/dif.o 00:02:46.383 CC lib/util/fd.o 00:02:46.383 CC lib/util/file.o 00:02:46.383 CC lib/util/hexlify.o 00:02:46.383 CC lib/util/iov.o 00:02:46.383 CC lib/util/math.o 00:02:46.383 CC lib/util/pipe.o 00:02:46.383 CC lib/util/strerror_tls.o 00:02:46.383 CC lib/util/string.o 00:02:46.383 CC lib/util/uuid.o 00:02:46.383 CC lib/util/fd_group.o 00:02:46.383 CC lib/util/xor.o 00:02:46.383 CC lib/util/zipf.o 00:02:46.383 CC lib/vfio_user/host/vfio_user_pci.o 00:02:46.383 CC lib/vfio_user/host/vfio_user.o 00:02:46.383 LIB libspdk_dma.a 00:02:46.383 SO libspdk_dma.so.4.0 00:02:46.383 SYMLINK libspdk_dma.so 00:02:46.383 LIB libspdk_ioat.a 00:02:46.383 SO libspdk_ioat.so.7.0 00:02:46.383 SYMLINK libspdk_ioat.so 00:02:46.383 LIB libspdk_vfio_user.a 00:02:46.383 SO libspdk_vfio_user.so.5.0 00:02:46.383 SYMLINK libspdk_vfio_user.so 00:02:46.383 LIB libspdk_util.a 00:02:46.383 SO libspdk_util.so.9.0 00:02:46.383 SYMLINK libspdk_util.so 00:02:46.640 CC lib/idxd/idxd.o 00:02:46.640 CC lib/env_dpdk/env.o 00:02:46.640 CC lib/vmd/vmd.o 00:02:46.640 CC lib/rdma/common.o 00:02:46.640 CC lib/conf/conf.o 00:02:46.640 CC lib/idxd/idxd_user.o 00:02:46.640 CC lib/rdma/rdma_verbs.o 00:02:46.640 CC lib/json/json_parse.o 00:02:46.640 CC lib/env_dpdk/memory.o 00:02:46.640 CC lib/vmd/led.o 00:02:46.640 CC lib/env_dpdk/pci.o 00:02:46.640 CC lib/json/json_util.o 00:02:46.640 CC lib/env_dpdk/init.o 00:02:46.640 CC lib/json/json_write.o 00:02:46.640 CC lib/env_dpdk/threads.o 00:02:46.640 CC lib/env_dpdk/pci_ioat.o 00:02:46.640 CC lib/env_dpdk/pci_virtio.o 00:02:46.640 CC lib/env_dpdk/pci_vmd.o 00:02:46.640 CC lib/env_dpdk/pci_idxd.o 00:02:46.640 CC lib/env_dpdk/pci_event.o 00:02:46.640 CC lib/env_dpdk/sigbus_handler.o 00:02:46.640 CC lib/env_dpdk/pci_dpdk.o 00:02:46.640 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:46.640 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:46.640 LIB libspdk_trace_parser.a 00:02:46.640 SO libspdk_trace_parser.so.5.0 00:02:46.898 LIB libspdk_conf.a 00:02:46.898 SYMLINK libspdk_trace_parser.so 00:02:46.898 SO libspdk_conf.so.6.0 00:02:46.898 LIB libspdk_json.a 00:02:46.898 LIB libspdk_rdma.a 00:02:46.898 SYMLINK libspdk_conf.so 00:02:46.898 SO libspdk_rdma.so.6.0 00:02:46.898 SO libspdk_json.so.6.0 00:02:46.898 SYMLINK libspdk_rdma.so 00:02:46.898 SYMLINK libspdk_json.so 00:02:47.155 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:47.155 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:47.155 CC lib/jsonrpc/jsonrpc_client.o 00:02:47.155 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:47.155 LIB libspdk_idxd.a 00:02:47.155 SO libspdk_idxd.so.12.0 00:02:47.155 SYMLINK libspdk_idxd.so 00:02:47.155 LIB libspdk_vmd.a 00:02:47.155 SO libspdk_vmd.so.6.0 00:02:47.413 SYMLINK libspdk_vmd.so 00:02:47.413 LIB libspdk_jsonrpc.a 00:02:47.413 SO libspdk_jsonrpc.so.6.0 00:02:47.413 SYMLINK libspdk_jsonrpc.so 00:02:47.671 CC lib/rpc/rpc.o 00:02:47.928 LIB libspdk_rpc.a 00:02:47.928 SO libspdk_rpc.so.6.0 00:02:47.928 SYMLINK libspdk_rpc.so 00:02:48.187 CC lib/notify/notify.o 00:02:48.187 CC lib/keyring/keyring.o 00:02:48.187 CC lib/trace/trace.o 00:02:48.187 CC lib/notify/notify_rpc.o 00:02:48.187 CC lib/keyring/keyring_rpc.o 00:02:48.187 CC lib/trace/trace_flags.o 00:02:48.187 CC lib/trace/trace_rpc.o 00:02:48.187 LIB libspdk_notify.a 00:02:48.187 SO libspdk_notify.so.6.0 00:02:48.187 LIB libspdk_keyring.a 00:02:48.445 LIB libspdk_trace.a 00:02:48.445 SYMLINK libspdk_notify.so 00:02:48.445 SO libspdk_keyring.so.1.0 00:02:48.445 SO libspdk_trace.so.10.0 00:02:48.445 SYMLINK libspdk_keyring.so 00:02:48.445 SYMLINK libspdk_trace.so 00:02:48.445 CC lib/thread/thread.o 00:02:48.445 CC lib/thread/iobuf.o 00:02:48.445 CC lib/sock/sock.o 00:02:48.445 CC lib/sock/sock_rpc.o 00:02:48.704 LIB libspdk_env_dpdk.a 00:02:48.704 SO libspdk_env_dpdk.so.14.0 00:02:48.704 SYMLINK libspdk_env_dpdk.so 00:02:48.970 LIB libspdk_sock.a 00:02:48.970 SO libspdk_sock.so.9.0 00:02:48.970 SYMLINK libspdk_sock.so 00:02:49.227 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:49.227 CC lib/nvme/nvme_ctrlr.o 00:02:49.227 CC lib/nvme/nvme_fabric.o 00:02:49.227 CC lib/nvme/nvme_ns_cmd.o 00:02:49.227 CC lib/nvme/nvme_ns.o 00:02:49.227 CC lib/nvme/nvme_pcie_common.o 00:02:49.227 CC lib/nvme/nvme_pcie.o 00:02:49.227 CC lib/nvme/nvme_qpair.o 00:02:49.227 CC lib/nvme/nvme.o 00:02:49.227 CC lib/nvme/nvme_quirks.o 00:02:49.227 CC lib/nvme/nvme_transport.o 00:02:49.227 CC lib/nvme/nvme_discovery.o 00:02:49.227 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:49.227 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:49.227 CC lib/nvme/nvme_tcp.o 00:02:49.227 CC lib/nvme/nvme_opal.o 00:02:49.227 CC lib/nvme/nvme_io_msg.o 00:02:49.227 CC lib/nvme/nvme_poll_group.o 00:02:49.227 CC lib/nvme/nvme_zns.o 00:02:49.227 CC lib/nvme/nvme_stubs.o 00:02:49.227 CC lib/nvme/nvme_auth.o 00:02:49.227 CC lib/nvme/nvme_cuse.o 00:02:49.227 CC lib/nvme/nvme_vfio_user.o 00:02:49.227 CC lib/nvme/nvme_rdma.o 00:02:50.162 LIB libspdk_thread.a 00:02:50.162 SO libspdk_thread.so.10.0 00:02:50.162 SYMLINK libspdk_thread.so 00:02:50.420 CC lib/init/json_config.o 00:02:50.420 CC lib/init/subsystem.o 00:02:50.420 CC lib/init/subsystem_rpc.o 00:02:50.420 CC lib/init/rpc.o 00:02:50.420 CC lib/accel/accel.o 00:02:50.420 CC lib/blob/blobstore.o 00:02:50.420 CC lib/virtio/virtio.o 00:02:50.420 CC lib/vfu_tgt/tgt_endpoint.o 00:02:50.420 CC lib/vfu_tgt/tgt_rpc.o 00:02:50.420 CC lib/accel/accel_rpc.o 00:02:50.420 CC lib/blob/request.o 00:02:50.420 CC lib/virtio/virtio_vhost_user.o 00:02:50.420 CC lib/blob/zeroes.o 00:02:50.420 CC lib/accel/accel_sw.o 00:02:50.420 CC lib/virtio/virtio_vfio_user.o 00:02:50.420 CC lib/blob/blob_bs_dev.o 00:02:50.420 CC lib/virtio/virtio_pci.o 00:02:50.678 LIB libspdk_init.a 00:02:50.679 SO libspdk_init.so.5.0 00:02:50.679 LIB libspdk_virtio.a 00:02:50.679 LIB libspdk_vfu_tgt.a 00:02:50.679 SYMLINK libspdk_init.so 00:02:50.679 SO libspdk_vfu_tgt.so.3.0 00:02:50.679 SO libspdk_virtio.so.7.0 
00:02:50.937 SYMLINK libspdk_vfu_tgt.so 00:02:50.937 SYMLINK libspdk_virtio.so 00:02:50.937 CC lib/event/app.o 00:02:50.937 CC lib/event/reactor.o 00:02:50.937 CC lib/event/log_rpc.o 00:02:50.937 CC lib/event/app_rpc.o 00:02:50.937 CC lib/event/scheduler_static.o 00:02:51.504 LIB libspdk_event.a 00:02:51.504 SO libspdk_event.so.13.0 00:02:51.504 SYMLINK libspdk_event.so 00:02:51.504 LIB libspdk_accel.a 00:02:51.504 SO libspdk_accel.so.15.0 00:02:51.504 SYMLINK libspdk_accel.so 00:02:51.504 LIB libspdk_nvme.a 00:02:51.762 SO libspdk_nvme.so.13.0 00:02:51.763 CC lib/bdev/bdev.o 00:02:51.763 CC lib/bdev/bdev_rpc.o 00:02:51.763 CC lib/bdev/bdev_zone.o 00:02:51.763 CC lib/bdev/part.o 00:02:51.763 CC lib/bdev/scsi_nvme.o 00:02:52.020 SYMLINK libspdk_nvme.so 00:02:53.394 LIB libspdk_blob.a 00:02:53.394 SO libspdk_blob.so.11.0 00:02:53.394 SYMLINK libspdk_blob.so 00:02:53.652 CC lib/blobfs/blobfs.o 00:02:53.652 CC lib/blobfs/tree.o 00:02:53.652 CC lib/lvol/lvol.o 00:02:54.218 LIB libspdk_bdev.a 00:02:54.218 SO libspdk_bdev.so.15.0 00:02:54.218 SYMLINK libspdk_bdev.so 00:02:54.218 LIB libspdk_blobfs.a 00:02:54.486 SO libspdk_blobfs.so.10.0 00:02:54.486 SYMLINK libspdk_blobfs.so 00:02:54.486 LIB libspdk_lvol.a 00:02:54.486 SO libspdk_lvol.so.10.0 00:02:54.486 CC lib/nbd/nbd.o 00:02:54.486 CC lib/ublk/ublk.o 00:02:54.486 CC lib/scsi/dev.o 00:02:54.486 CC lib/nvmf/ctrlr.o 00:02:54.486 CC lib/ublk/ublk_rpc.o 00:02:54.486 CC lib/nbd/nbd_rpc.o 00:02:54.486 CC lib/scsi/lun.o 00:02:54.486 CC lib/ftl/ftl_core.o 00:02:54.486 CC lib/nvmf/ctrlr_discovery.o 00:02:54.486 CC lib/ftl/ftl_init.o 00:02:54.486 CC lib/scsi/port.o 00:02:54.486 CC lib/nvmf/ctrlr_bdev.o 00:02:54.486 CC lib/scsi/scsi.o 00:02:54.486 CC lib/nvmf/subsystem.o 00:02:54.486 CC lib/ftl/ftl_layout.o 00:02:54.486 CC lib/scsi/scsi_bdev.o 00:02:54.486 CC lib/nvmf/nvmf.o 00:02:54.486 CC lib/ftl/ftl_debug.o 00:02:54.486 CC lib/ftl/ftl_io.o 00:02:54.486 CC lib/scsi/scsi_rpc.o 00:02:54.486 CC lib/nvmf/nvmf_rpc.o 00:02:54.486 CC lib/scsi/scsi_pr.o 00:02:54.486 CC lib/nvmf/transport.o 00:02:54.486 CC lib/ftl/ftl_sb.o 00:02:54.486 CC lib/nvmf/tcp.o 00:02:54.486 CC lib/scsi/task.o 00:02:54.486 CC lib/nvmf/vfio_user.o 00:02:54.486 CC lib/ftl/ftl_l2p.o 00:02:54.486 CC lib/ftl/ftl_l2p_flat.o 00:02:54.486 CC lib/ftl/ftl_nv_cache.o 00:02:54.486 CC lib/nvmf/rdma.o 00:02:54.486 CC lib/ftl/ftl_band.o 00:02:54.486 CC lib/ftl/ftl_band_ops.o 00:02:54.486 CC lib/ftl/ftl_writer.o 00:02:54.486 CC lib/ftl/ftl_reloc.o 00:02:54.486 CC lib/ftl/ftl_rq.o 00:02:54.486 CC lib/ftl/ftl_l2p_cache.o 00:02:54.486 CC lib/ftl/ftl_p2l.o 00:02:54.486 CC lib/ftl/mngt/ftl_mngt.o 00:02:54.486 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:54.486 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:54.486 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:54.486 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:54.486 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:54.486 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:54.486 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:54.486 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:54.486 SYMLINK libspdk_lvol.so 00:02:54.486 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:54.744 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:54.744 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:54.744 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:55.007 CC lib/ftl/utils/ftl_conf.o 00:02:55.007 CC lib/ftl/utils/ftl_md.o 00:02:55.007 CC lib/ftl/utils/ftl_mempool.o 00:02:55.007 CC lib/ftl/utils/ftl_bitmap.o 00:02:55.007 CC lib/ftl/utils/ftl_property.o 00:02:55.007 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:55.007 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:55.007 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:55.007 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:55.007 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:55.007 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:55.007 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:55.007 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:55.007 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:55.007 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:55.007 CC lib/ftl/base/ftl_base_dev.o 00:02:55.007 CC lib/ftl/base/ftl_base_bdev.o 00:02:55.007 CC lib/ftl/ftl_trace.o 00:02:55.268 LIB libspdk_nbd.a 00:02:55.268 SO libspdk_nbd.so.7.0 00:02:55.268 SYMLINK libspdk_nbd.so 00:02:55.526 LIB libspdk_scsi.a 00:02:55.526 SO libspdk_scsi.so.9.0 00:02:55.526 LIB libspdk_ublk.a 00:02:55.526 SYMLINK libspdk_scsi.so 00:02:55.526 SO libspdk_ublk.so.3.0 00:02:55.526 SYMLINK libspdk_ublk.so 00:02:55.785 CC lib/iscsi/conn.o 00:02:55.785 CC lib/iscsi/init_grp.o 00:02:55.785 CC lib/iscsi/iscsi.o 00:02:55.785 CC lib/iscsi/md5.o 00:02:55.785 CC lib/iscsi/param.o 00:02:55.785 CC lib/vhost/vhost.o 00:02:55.785 CC lib/iscsi/portal_grp.o 00:02:55.785 CC lib/iscsi/tgt_node.o 00:02:55.785 CC lib/iscsi/iscsi_subsystem.o 00:02:55.785 CC lib/vhost/vhost_rpc.o 00:02:55.785 CC lib/iscsi/iscsi_rpc.o 00:02:55.785 CC lib/vhost/vhost_scsi.o 00:02:55.785 CC lib/iscsi/task.o 00:02:55.785 CC lib/vhost/vhost_blk.o 00:02:55.785 CC lib/vhost/rte_vhost_user.o 00:02:55.785 LIB libspdk_ftl.a 00:02:56.044 SO libspdk_ftl.so.9.0 00:02:56.302 SYMLINK libspdk_ftl.so 00:02:56.867 LIB libspdk_vhost.a 00:02:56.867 SO libspdk_vhost.so.8.0 00:02:57.126 LIB libspdk_nvmf.a 00:02:57.126 SYMLINK libspdk_vhost.so 00:02:57.126 SO libspdk_nvmf.so.18.0 00:02:57.126 LIB libspdk_iscsi.a 00:02:57.126 SO libspdk_iscsi.so.8.0 00:02:57.385 SYMLINK libspdk_nvmf.so 00:02:57.385 SYMLINK libspdk_iscsi.so 00:02:57.643 CC module/vfu_device/vfu_virtio.o 00:02:57.643 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.643 CC module/vfu_device/vfu_virtio_blk.o 00:02:57.643 CC module/vfu_device/vfu_virtio_scsi.o 00:02:57.643 CC module/vfu_device/vfu_virtio_rpc.o 00:02:57.643 CC module/blob/bdev/blob_bdev.o 00:02:57.643 CC module/accel/iaa/accel_iaa.o 00:02:57.643 CC module/accel/ioat/accel_ioat.o 00:02:57.643 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.643 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.643 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.643 CC module/sock/posix/posix.o 00:02:57.643 CC module/keyring/file/keyring.o 00:02:57.643 CC module/accel/dsa/accel_dsa.o 00:02:57.643 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.643 CC module/keyring/file/keyring_rpc.o 00:02:57.643 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.643 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.643 CC module/accel/error/accel_error.o 00:02:57.643 CC module/accel/error/accel_error_rpc.o 00:02:57.643 LIB libspdk_env_dpdk_rpc.a 00:02:57.900 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.900 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.900 LIB libspdk_keyring_file.a 00:02:57.900 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.900 LIB libspdk_scheduler_gscheduler.a 00:02:57.900 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.900 SO libspdk_keyring_file.so.1.0 00:02:57.900 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.900 LIB libspdk_accel_error.a 00:02:57.900 LIB libspdk_scheduler_dynamic.a 00:02:57.900 LIB libspdk_accel_ioat.a 00:02:57.900 LIB libspdk_accel_iaa.a 00:02:57.900 SO libspdk_accel_error.so.2.0 00:02:57.900 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.900 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.901 SO libspdk_accel_ioat.so.6.0 
00:02:57.901 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.901 SYMLINK libspdk_keyring_file.so 00:02:57.901 SO libspdk_accel_iaa.so.3.0 00:02:57.901 LIB libspdk_accel_dsa.a 00:02:57.901 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.901 SYMLINK libspdk_accel_error.so 00:02:57.901 SO libspdk_accel_dsa.so.5.0 00:02:57.901 LIB libspdk_blob_bdev.a 00:02:57.901 SYMLINK libspdk_accel_ioat.so 00:02:57.901 SYMLINK libspdk_accel_iaa.so 00:02:57.901 SO libspdk_blob_bdev.so.11.0 00:02:58.185 SYMLINK libspdk_accel_dsa.so 00:02:58.185 SYMLINK libspdk_blob_bdev.so 00:02:58.185 LIB libspdk_vfu_device.a 00:02:58.452 SO libspdk_vfu_device.so.3.0 00:02:58.452 CC module/bdev/delay/vbdev_delay.o 00:02:58.452 CC module/bdev/gpt/gpt.o 00:02:58.452 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.452 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.452 CC module/bdev/null/bdev_null.o 00:02:58.452 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.452 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.452 CC module/bdev/null/bdev_null_rpc.o 00:02:58.452 CC module/bdev/passthru/vbdev_passthru.o 00:02:58.452 CC module/bdev/aio/bdev_aio.o 00:02:58.452 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.452 CC module/bdev/nvme/bdev_nvme.o 00:02:58.452 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.452 CC module/bdev/malloc/bdev_malloc.o 00:02:58.452 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.452 CC module/bdev/ftl/bdev_ftl.o 00:02:58.452 CC module/bdev/error/vbdev_error.o 00:02:58.452 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.452 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.452 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.452 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.452 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.452 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.452 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.452 CC module/bdev/nvme/nvme_rpc.o 00:02:58.452 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.452 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.452 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.452 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.452 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.452 CC module/bdev/nvme/vbdev_opal.o 00:02:58.452 CC module/bdev/split/vbdev_split.o 00:02:58.452 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.452 CC module/bdev/raid/bdev_raid.o 00:02:58.452 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.452 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.452 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.452 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.452 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.452 CC module/bdev/raid/raid0.o 00:02:58.452 CC module/bdev/raid/raid1.o 00:02:58.452 CC module/bdev/raid/concat.o 00:02:58.452 SYMLINK libspdk_vfu_device.so 00:02:58.452 LIB libspdk_sock_posix.a 00:02:58.452 SO libspdk_sock_posix.so.6.0 00:02:58.711 SYMLINK libspdk_sock_posix.so 00:02:58.711 LIB libspdk_blobfs_bdev.a 00:02:58.711 SO libspdk_blobfs_bdev.so.6.0 00:02:58.711 LIB libspdk_bdev_split.a 00:02:58.711 SO libspdk_bdev_split.so.6.0 00:02:58.711 LIB libspdk_bdev_aio.a 00:02:58.711 SYMLINK libspdk_blobfs_bdev.so 00:02:58.711 LIB libspdk_bdev_gpt.a 00:02:58.711 LIB libspdk_bdev_null.a 00:02:58.711 LIB libspdk_bdev_error.a 00:02:58.970 SO libspdk_bdev_gpt.so.6.0 00:02:58.970 SO libspdk_bdev_aio.so.6.0 00:02:58.970 LIB libspdk_bdev_ftl.a 00:02:58.970 SO libspdk_bdev_null.so.6.0 00:02:58.970 SYMLINK libspdk_bdev_split.so 00:02:58.970 SO libspdk_bdev_error.so.6.0 00:02:58.970 LIB libspdk_bdev_passthru.a 00:02:58.970 SO libspdk_bdev_ftl.so.6.0 00:02:58.970 SYMLINK 
libspdk_bdev_gpt.so 00:02:58.970 SYMLINK libspdk_bdev_aio.so 00:02:58.970 SO libspdk_bdev_passthru.so.6.0 00:02:58.970 SYMLINK libspdk_bdev_null.so 00:02:58.970 SYMLINK libspdk_bdev_error.so 00:02:58.970 LIB libspdk_bdev_zone_block.a 00:02:58.970 LIB libspdk_bdev_iscsi.a 00:02:58.970 LIB libspdk_bdev_malloc.a 00:02:58.970 SYMLINK libspdk_bdev_ftl.so 00:02:58.970 SO libspdk_bdev_zone_block.so.6.0 00:02:58.970 LIB libspdk_bdev_virtio.a 00:02:58.970 SO libspdk_bdev_malloc.so.6.0 00:02:58.970 LIB libspdk_bdev_delay.a 00:02:58.970 SO libspdk_bdev_iscsi.so.6.0 00:02:58.970 SYMLINK libspdk_bdev_passthru.so 00:02:58.970 SO libspdk_bdev_delay.so.6.0 00:02:58.970 SO libspdk_bdev_virtio.so.6.0 00:02:58.970 SYMLINK libspdk_bdev_zone_block.so 00:02:58.970 SYMLINK libspdk_bdev_malloc.so 00:02:58.970 SYMLINK libspdk_bdev_iscsi.so 00:02:58.970 SYMLINK libspdk_bdev_delay.so 00:02:58.970 LIB libspdk_bdev_lvol.a 00:02:58.970 SYMLINK libspdk_bdev_virtio.so 00:02:58.970 SO libspdk_bdev_lvol.so.6.0 00:02:59.227 SYMLINK libspdk_bdev_lvol.so 00:02:59.485 LIB libspdk_bdev_raid.a 00:02:59.485 SO libspdk_bdev_raid.so.6.0 00:02:59.485 SYMLINK libspdk_bdev_raid.so 00:03:00.859 LIB libspdk_bdev_nvme.a 00:03:00.859 SO libspdk_bdev_nvme.so.7.0 00:03:00.859 SYMLINK libspdk_bdev_nvme.so 00:03:01.116 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:01.116 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:01.116 CC module/event/subsystems/scheduler/scheduler.o 00:03:01.116 CC module/event/subsystems/vmd/vmd.o 00:03:01.117 CC module/event/subsystems/keyring/keyring.o 00:03:01.117 CC module/event/subsystems/sock/sock.o 00:03:01.117 CC module/event/subsystems/iobuf/iobuf.o 00:03:01.117 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:01.117 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:01.375 LIB libspdk_event_keyring.a 00:03:01.375 LIB libspdk_event_sock.a 00:03:01.375 LIB libspdk_event_scheduler.a 00:03:01.375 LIB libspdk_event_vhost_blk.a 00:03:01.375 LIB libspdk_event_vfu_tgt.a 00:03:01.375 LIB libspdk_event_vmd.a 00:03:01.375 SO libspdk_event_keyring.so.1.0 00:03:01.375 SO libspdk_event_sock.so.5.0 00:03:01.375 SO libspdk_event_scheduler.so.4.0 00:03:01.375 LIB libspdk_event_iobuf.a 00:03:01.375 SO libspdk_event_vhost_blk.so.3.0 00:03:01.375 SO libspdk_event_vfu_tgt.so.3.0 00:03:01.375 SO libspdk_event_vmd.so.6.0 00:03:01.375 SO libspdk_event_iobuf.so.3.0 00:03:01.375 SYMLINK libspdk_event_keyring.so 00:03:01.375 SYMLINK libspdk_event_sock.so 00:03:01.375 SYMLINK libspdk_event_scheduler.so 00:03:01.375 SYMLINK libspdk_event_vhost_blk.so 00:03:01.375 SYMLINK libspdk_event_vfu_tgt.so 00:03:01.375 SYMLINK libspdk_event_vmd.so 00:03:01.375 SYMLINK libspdk_event_iobuf.so 00:03:01.633 CC module/event/subsystems/accel/accel.o 00:03:01.633 LIB libspdk_event_accel.a 00:03:01.633 SO libspdk_event_accel.so.6.0 00:03:01.891 SYMLINK libspdk_event_accel.so 00:03:01.891 CC module/event/subsystems/bdev/bdev.o 00:03:02.149 LIB libspdk_event_bdev.a 00:03:02.149 SO libspdk_event_bdev.so.6.0 00:03:02.149 SYMLINK libspdk_event_bdev.so 00:03:02.406 CC module/event/subsystems/nbd/nbd.o 00:03:02.406 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.406 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:02.406 CC module/event/subsystems/ublk/ublk.o 00:03:02.406 CC module/event/subsystems/scsi/scsi.o 00:03:02.664 LIB libspdk_event_nbd.a 00:03:02.664 LIB libspdk_event_ublk.a 00:03:02.665 LIB libspdk_event_scsi.a 00:03:02.665 SO libspdk_event_nbd.so.6.0 00:03:02.665 SO libspdk_event_ublk.so.3.0 00:03:02.665 SO libspdk_event_scsi.so.6.0 
00:03:02.665 SYMLINK libspdk_event_nbd.so 00:03:02.665 SYMLINK libspdk_event_ublk.so 00:03:02.665 LIB libspdk_event_nvmf.a 00:03:02.665 SYMLINK libspdk_event_scsi.so 00:03:02.665 SO libspdk_event_nvmf.so.6.0 00:03:02.665 SYMLINK libspdk_event_nvmf.so 00:03:02.923 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:02.923 CC module/event/subsystems/iscsi/iscsi.o 00:03:02.923 LIB libspdk_event_vhost_scsi.a 00:03:02.923 SO libspdk_event_vhost_scsi.so.3.0 00:03:02.923 LIB libspdk_event_iscsi.a 00:03:02.923 SO libspdk_event_iscsi.so.6.0 00:03:02.923 SYMLINK libspdk_event_vhost_scsi.so 00:03:03.182 SYMLINK libspdk_event_iscsi.so 00:03:03.182 SO libspdk.so.6.0 00:03:03.182 SYMLINK libspdk.so 00:03:03.447 CC app/spdk_top/spdk_top.o 00:03:03.447 CXX app/trace/trace.o 00:03:03.447 CC app/trace_record/trace_record.o 00:03:03.447 CC app/spdk_nvme_discover/discovery_aer.o 00:03:03.447 CC app/spdk_lspci/spdk_lspci.o 00:03:03.447 TEST_HEADER include/spdk/accel.h 00:03:03.447 CC app/spdk_nvme_identify/identify.o 00:03:03.447 TEST_HEADER include/spdk/accel_module.h 00:03:03.447 CC test/rpc_client/rpc_client_test.o 00:03:03.447 TEST_HEADER include/spdk/assert.h 00:03:03.447 CC app/spdk_nvme_perf/perf.o 00:03:03.447 TEST_HEADER include/spdk/barrier.h 00:03:03.447 TEST_HEADER include/spdk/base64.h 00:03:03.447 TEST_HEADER include/spdk/bdev.h 00:03:03.447 TEST_HEADER include/spdk/bdev_module.h 00:03:03.447 TEST_HEADER include/spdk/bdev_zone.h 00:03:03.447 TEST_HEADER include/spdk/bit_array.h 00:03:03.447 TEST_HEADER include/spdk/bit_pool.h 00:03:03.447 TEST_HEADER include/spdk/blob_bdev.h 00:03:03.447 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:03.447 TEST_HEADER include/spdk/blobfs.h 00:03:03.447 TEST_HEADER include/spdk/blob.h 00:03:03.447 TEST_HEADER include/spdk/conf.h 00:03:03.447 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.447 TEST_HEADER include/spdk/config.h 00:03:03.447 TEST_HEADER include/spdk/cpuset.h 00:03:03.447 TEST_HEADER include/spdk/crc16.h 00:03:03.447 CC app/spdk_dd/spdk_dd.o 00:03:03.447 TEST_HEADER include/spdk/crc32.h 00:03:03.447 TEST_HEADER include/spdk/crc64.h 00:03:03.447 TEST_HEADER include/spdk/dif.h 00:03:03.447 TEST_HEADER include/spdk/dma.h 00:03:03.447 TEST_HEADER include/spdk/endian.h 00:03:03.447 CC app/nvmf_tgt/nvmf_main.o 00:03:03.447 TEST_HEADER include/spdk/env_dpdk.h 00:03:03.447 CC app/iscsi_tgt/iscsi_tgt.o 00:03:03.447 TEST_HEADER include/spdk/env.h 00:03:03.447 CC app/vhost/vhost.o 00:03:03.447 TEST_HEADER include/spdk/event.h 00:03:03.447 TEST_HEADER include/spdk/fd_group.h 00:03:03.447 TEST_HEADER include/spdk/fd.h 00:03:03.447 TEST_HEADER include/spdk/file.h 00:03:03.447 TEST_HEADER include/spdk/ftl.h 00:03:03.447 TEST_HEADER include/spdk/gpt_spec.h 00:03:03.447 TEST_HEADER include/spdk/hexlify.h 00:03:03.447 TEST_HEADER include/spdk/histogram_data.h 00:03:03.447 TEST_HEADER include/spdk/idxd.h 00:03:03.447 TEST_HEADER include/spdk/idxd_spec.h 00:03:03.447 CC app/spdk_tgt/spdk_tgt.o 00:03:03.447 TEST_HEADER include/spdk/init.h 00:03:03.447 TEST_HEADER include/spdk/ioat.h 00:03:03.447 CC examples/sock/hello_world/hello_sock.o 00:03:03.447 TEST_HEADER include/spdk/ioat_spec.h 00:03:03.447 CC examples/util/zipf/zipf.o 00:03:03.447 CC examples/nvme/reconnect/reconnect.o 00:03:03.447 CC examples/ioat/verify/verify.o 00:03:03.447 TEST_HEADER include/spdk/iscsi_spec.h 00:03:03.447 CC examples/idxd/perf/perf.o 00:03:03.447 CC examples/accel/perf/accel_perf.o 00:03:03.447 CC test/thread/poller_perf/poller_perf.o 00:03:03.447 CC 
examples/nvme/nvme_manage/nvme_manage.o 00:03:03.447 CC examples/ioat/perf/perf.o 00:03:03.447 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.447 TEST_HEADER include/spdk/json.h 00:03:03.447 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:03.447 CC examples/nvme/hotplug/hotplug.o 00:03:03.447 CC test/event/event_perf/event_perf.o 00:03:03.447 CC examples/nvme/arbitration/arbitration.o 00:03:03.447 TEST_HEADER include/spdk/jsonrpc.h 00:03:03.447 TEST_HEADER include/spdk/keyring.h 00:03:03.447 CC test/event/reactor_perf/reactor_perf.o 00:03:03.447 TEST_HEADER include/spdk/keyring_module.h 00:03:03.447 CC examples/nvme/hello_world/hello_world.o 00:03:03.447 CC examples/vmd/led/led.o 00:03:03.447 CC test/event/reactor/reactor.o 00:03:03.447 TEST_HEADER include/spdk/likely.h 00:03:03.447 TEST_HEADER include/spdk/log.h 00:03:03.447 CC app/fio/nvme/fio_plugin.o 00:03:03.711 TEST_HEADER include/spdk/lvol.h 00:03:03.711 TEST_HEADER include/spdk/memory.h 00:03:03.711 CC test/nvme/aer/aer.o 00:03:03.711 TEST_HEADER include/spdk/mmio.h 00:03:03.711 TEST_HEADER include/spdk/nbd.h 00:03:03.711 TEST_HEADER include/spdk/notify.h 00:03:03.711 TEST_HEADER include/spdk/nvme.h 00:03:03.711 TEST_HEADER include/spdk/nvme_intel.h 00:03:03.711 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:03.711 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:03.711 TEST_HEADER include/spdk/nvme_spec.h 00:03:03.711 TEST_HEADER include/spdk/nvme_zns.h 00:03:03.711 CC test/blobfs/mkfs/mkfs.o 00:03:03.711 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:03.711 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:03.711 CC examples/bdev/hello_world/hello_bdev.o 00:03:03.711 CC test/bdev/bdevio/bdevio.o 00:03:03.711 TEST_HEADER include/spdk/nvmf.h 00:03:03.711 CC examples/thread/thread/thread_ex.o 00:03:03.711 CC examples/blob/hello_world/hello_blob.o 00:03:03.711 TEST_HEADER include/spdk/nvmf_spec.h 00:03:03.711 CC examples/bdev/bdevperf/bdevperf.o 00:03:03.711 CC examples/blob/cli/blobcli.o 00:03:03.711 CC test/dma/test_dma/test_dma.o 00:03:03.711 CC test/accel/dif/dif.o 00:03:03.711 TEST_HEADER include/spdk/nvmf_transport.h 00:03:03.711 TEST_HEADER include/spdk/opal.h 00:03:03.711 TEST_HEADER include/spdk/opal_spec.h 00:03:03.711 CC examples/nvmf/nvmf/nvmf.o 00:03:03.711 TEST_HEADER include/spdk/pci_ids.h 00:03:03.711 TEST_HEADER include/spdk/pipe.h 00:03:03.711 TEST_HEADER include/spdk/queue.h 00:03:03.711 TEST_HEADER include/spdk/reduce.h 00:03:03.711 CC test/app/bdev_svc/bdev_svc.o 00:03:03.711 TEST_HEADER include/spdk/rpc.h 00:03:03.711 TEST_HEADER include/spdk/scheduler.h 00:03:03.711 TEST_HEADER include/spdk/scsi.h 00:03:03.711 TEST_HEADER include/spdk/scsi_spec.h 00:03:03.711 TEST_HEADER include/spdk/sock.h 00:03:03.711 TEST_HEADER include/spdk/stdinc.h 00:03:03.711 TEST_HEADER include/spdk/string.h 00:03:03.711 TEST_HEADER include/spdk/thread.h 00:03:03.711 TEST_HEADER include/spdk/trace.h 00:03:03.711 TEST_HEADER include/spdk/trace_parser.h 00:03:03.711 TEST_HEADER include/spdk/tree.h 00:03:03.711 TEST_HEADER include/spdk/ublk.h 00:03:03.711 TEST_HEADER include/spdk/util.h 00:03:03.711 LINK spdk_lspci 00:03:03.711 TEST_HEADER include/spdk/uuid.h 00:03:03.711 TEST_HEADER include/spdk/version.h 00:03:03.711 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:03.711 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:03.711 CC test/lvol/esnap/esnap.o 00:03:03.711 TEST_HEADER include/spdk/vhost.h 00:03:03.711 TEST_HEADER include/spdk/vmd.h 00:03:03.711 TEST_HEADER include/spdk/xor.h 00:03:03.711 TEST_HEADER include/spdk/zipf.h 00:03:03.711 CXX 
test/cpp_headers/accel.o 00:03:03.711 CC test/env/mem_callbacks/mem_callbacks.o 00:03:03.711 LINK rpc_client_test 00:03:03.711 LINK spdk_nvme_discover 00:03:03.976 LINK interrupt_tgt 00:03:03.976 LINK lsvmd 00:03:03.976 LINK event_perf 00:03:03.976 LINK reactor 00:03:03.976 LINK reactor_perf 00:03:03.976 LINK zipf 00:03:03.976 LINK poller_perf 00:03:03.976 LINK nvmf_tgt 00:03:03.976 LINK led 00:03:03.976 LINK vhost 00:03:03.976 LINK spdk_trace_record 00:03:03.976 LINK iscsi_tgt 00:03:03.976 LINK cmb_copy 00:03:03.976 LINK verify 00:03:03.976 LINK ioat_perf 00:03:03.976 LINK spdk_tgt 00:03:03.976 LINK mkfs 00:03:03.976 LINK hello_world 00:03:03.976 LINK bdev_svc 00:03:03.976 LINK hotplug 00:03:03.976 LINK hello_sock 00:03:04.242 LINK hello_bdev 00:03:04.242 LINK hello_blob 00:03:04.242 CXX test/cpp_headers/accel_module.o 00:03:04.242 LINK thread 00:03:04.242 LINK aer 00:03:04.242 LINK spdk_dd 00:03:04.242 LINK idxd_perf 00:03:04.242 LINK arbitration 00:03:04.242 LINK nvmf 00:03:04.242 LINK reconnect 00:03:04.242 CXX test/cpp_headers/assert.o 00:03:04.242 CXX test/cpp_headers/barrier.o 00:03:04.242 CC test/event/app_repeat/app_repeat.o 00:03:04.242 LINK spdk_trace 00:03:04.242 CC examples/nvme/abort/abort.o 00:03:04.511 LINK dif 00:03:04.511 LINK bdevio 00:03:04.511 CC test/nvme/reset/reset.o 00:03:04.511 CC test/env/vtophys/vtophys.o 00:03:04.511 CXX test/cpp_headers/base64.o 00:03:04.511 LINK test_dma 00:03:04.511 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.511 LINK accel_perf 00:03:04.511 CC test/app/histogram_perf/histogram_perf.o 00:03:04.511 CC app/fio/bdev/fio_plugin.o 00:03:04.511 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:04.511 CC test/env/memory/memory_ut.o 00:03:04.511 CC test/nvme/sgl/sgl.o 00:03:04.511 LINK nvme_manage 00:03:04.511 CXX test/cpp_headers/bdev.o 00:03:04.511 CC test/env/pci/pci_ut.o 00:03:04.511 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:04.511 CC test/event/scheduler/scheduler.o 00:03:04.511 CC test/app/jsoncat/jsoncat.o 00:03:04.511 LINK blobcli 00:03:04.511 CXX test/cpp_headers/bdev_module.o 00:03:04.772 CXX test/cpp_headers/bdev_zone.o 00:03:04.772 LINK spdk_nvme 00:03:04.772 LINK app_repeat 00:03:04.772 CC test/app/stub/stub.o 00:03:04.772 CXX test/cpp_headers/bit_array.o 00:03:04.772 CXX test/cpp_headers/bit_pool.o 00:03:04.772 CC test/nvme/e2edp/nvme_dp.o 00:03:04.772 CC test/nvme/err_injection/err_injection.o 00:03:04.772 CC test/nvme/overhead/overhead.o 00:03:04.772 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:04.772 CXX test/cpp_headers/blob_bdev.o 00:03:04.772 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.772 LINK vtophys 00:03:04.772 CC test/nvme/startup/startup.o 00:03:04.772 LINK histogram_perf 00:03:04.772 LINK env_dpdk_post_init 00:03:04.772 CXX test/cpp_headers/blobfs.o 00:03:04.772 CC test/nvme/reserve/reserve.o 00:03:04.772 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:04.772 CC test/nvme/simple_copy/simple_copy.o 00:03:04.772 CC test/nvme/connect_stress/connect_stress.o 00:03:04.772 CC test/nvme/boot_partition/boot_partition.o 00:03:04.772 CC test/nvme/compliance/nvme_compliance.o 00:03:04.772 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:04.772 CXX test/cpp_headers/blob.o 00:03:05.033 CC test/nvme/fused_ordering/fused_ordering.o 00:03:05.033 CXX test/cpp_headers/conf.o 00:03:05.033 LINK pmr_persistence 00:03:05.033 LINK jsoncat 00:03:05.033 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:05.033 LINK reset 00:03:05.033 CXX test/cpp_headers/config.o 00:03:05.033 CC test/nvme/cuse/cuse.o 00:03:05.033 CC 
test/nvme/fdp/fdp.o 00:03:05.033 CXX test/cpp_headers/cpuset.o 00:03:05.033 CXX test/cpp_headers/crc16.o 00:03:05.033 LINK mem_callbacks 00:03:05.033 CXX test/cpp_headers/crc32.o 00:03:05.033 CXX test/cpp_headers/crc64.o 00:03:05.033 LINK stub 00:03:05.033 CXX test/cpp_headers/dif.o 00:03:05.033 CXX test/cpp_headers/dma.o 00:03:05.033 CXX test/cpp_headers/endian.o 00:03:05.033 CXX test/cpp_headers/env_dpdk.o 00:03:05.033 LINK spdk_nvme_perf 00:03:05.033 LINK scheduler 00:03:05.033 LINK sgl 00:03:05.033 LINK err_injection 00:03:05.033 LINK abort 00:03:05.033 LINK spdk_top 00:03:05.033 CXX test/cpp_headers/env.o 00:03:05.033 CXX test/cpp_headers/event.o 00:03:05.033 LINK bdevperf 00:03:05.033 CXX test/cpp_headers/fd_group.o 00:03:05.033 CXX test/cpp_headers/fd.o 00:03:05.295 CXX test/cpp_headers/file.o 00:03:05.295 LINK nvme_dp 00:03:05.295 LINK spdk_nvme_identify 00:03:05.295 CXX test/cpp_headers/ftl.o 00:03:05.295 LINK startup 00:03:05.295 CXX test/cpp_headers/gpt_spec.o 00:03:05.295 LINK boot_partition 00:03:05.295 CXX test/cpp_headers/hexlify.o 00:03:05.295 LINK connect_stress 00:03:05.295 CXX test/cpp_headers/histogram_data.o 00:03:05.295 CXX test/cpp_headers/idxd.o 00:03:05.295 LINK overhead 00:03:05.295 CXX test/cpp_headers/idxd_spec.o 00:03:05.295 LINK fused_ordering 00:03:05.295 LINK reserve 00:03:05.295 CXX test/cpp_headers/init.o 00:03:05.295 LINK simple_copy 00:03:05.295 LINK doorbell_aers 00:03:05.295 LINK pci_ut 00:03:05.295 CXX test/cpp_headers/ioat.o 00:03:05.295 CXX test/cpp_headers/ioat_spec.o 00:03:05.295 CXX test/cpp_headers/iscsi_spec.o 00:03:05.295 CXX test/cpp_headers/json.o 00:03:05.295 CXX test/cpp_headers/jsonrpc.o 00:03:05.295 CXX test/cpp_headers/keyring.o 00:03:05.560 CXX test/cpp_headers/keyring_module.o 00:03:05.560 LINK nvme_fuzz 00:03:05.560 CXX test/cpp_headers/likely.o 00:03:05.560 LINK spdk_bdev 00:03:05.560 CXX test/cpp_headers/log.o 00:03:05.560 CXX test/cpp_headers/lvol.o 00:03:05.560 CXX test/cpp_headers/memory.o 00:03:05.560 CXX test/cpp_headers/mmio.o 00:03:05.560 CXX test/cpp_headers/nbd.o 00:03:05.560 CXX test/cpp_headers/notify.o 00:03:05.560 CXX test/cpp_headers/nvme.o 00:03:05.560 CXX test/cpp_headers/nvme_intel.o 00:03:05.560 CXX test/cpp_headers/nvme_ocssd.o 00:03:05.560 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:05.560 CXX test/cpp_headers/nvme_spec.o 00:03:05.560 CXX test/cpp_headers/nvme_zns.o 00:03:05.560 CXX test/cpp_headers/nvmf_cmd.o 00:03:05.560 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:05.560 CXX test/cpp_headers/nvmf.o 00:03:05.560 LINK nvme_compliance 00:03:05.560 CXX test/cpp_headers/nvmf_spec.o 00:03:05.560 CXX test/cpp_headers/nvmf_transport.o 00:03:05.560 CXX test/cpp_headers/opal.o 00:03:05.560 CXX test/cpp_headers/opal_spec.o 00:03:05.560 CXX test/cpp_headers/pci_ids.o 00:03:05.560 CXX test/cpp_headers/pipe.o 00:03:05.560 CXX test/cpp_headers/queue.o 00:03:05.560 LINK fdp 00:03:05.560 CXX test/cpp_headers/reduce.o 00:03:05.560 CXX test/cpp_headers/rpc.o 00:03:05.560 CXX test/cpp_headers/scheduler.o 00:03:05.560 CXX test/cpp_headers/scsi.o 00:03:05.560 CXX test/cpp_headers/scsi_spec.o 00:03:05.560 CXX test/cpp_headers/sock.o 00:03:05.560 CXX test/cpp_headers/stdinc.o 00:03:05.820 CXX test/cpp_headers/thread.o 00:03:05.820 CXX test/cpp_headers/string.o 00:03:05.820 CXX test/cpp_headers/trace.o 00:03:05.820 CXX test/cpp_headers/trace_parser.o 00:03:05.820 LINK vhost_fuzz 00:03:05.820 CXX test/cpp_headers/tree.o 00:03:05.820 CXX test/cpp_headers/ublk.o 00:03:05.820 CXX test/cpp_headers/util.o 00:03:05.820 CXX 
test/cpp_headers/uuid.o 00:03:05.820 CXX test/cpp_headers/version.o 00:03:05.820 CXX test/cpp_headers/vfio_user_pci.o 00:03:05.820 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.820 CXX test/cpp_headers/vhost.o 00:03:05.820 CXX test/cpp_headers/vmd.o 00:03:05.820 CXX test/cpp_headers/xor.o 00:03:05.820 CXX test/cpp_headers/zipf.o 00:03:06.078 LINK memory_ut 00:03:06.642 LINK cuse 00:03:07.078 LINK iscsi_fuzz 00:03:09.607 LINK esnap 00:03:09.607 00:03:09.607 real 0m39.936s 00:03:09.607 user 7m35.079s 00:03:09.607 sys 1m47.953s 00:03:09.607 14:43:55 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:09.607 14:43:55 -- common/autotest_common.sh@10 -- $ set +x 00:03:09.607 ************************************ 00:03:09.607 END TEST make 00:03:09.607 ************************************ 00:03:09.607 14:43:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:09.607 14:43:55 -- pm/common@30 -- $ signal_monitor_resources TERM 00:03:09.607 14:43:55 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:03:09.607 14:43:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.607 14:43:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:09.607 14:43:55 -- pm/common@45 -- $ pid=3539765 00:03:09.608 14:43:55 -- pm/common@52 -- $ sudo kill -TERM 3539765 00:03:09.608 14:43:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.608 14:43:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:09.608 14:43:55 -- pm/common@45 -- $ pid=3539766 00:03:09.608 14:43:55 -- pm/common@52 -- $ sudo kill -TERM 3539766 00:03:09.608 14:43:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.608 14:43:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:09.608 14:43:55 -- pm/common@45 -- $ pid=3539768 00:03:09.608 14:43:55 -- pm/common@52 -- $ sudo kill -TERM 3539768 00:03:09.608 14:43:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.608 14:43:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:09.608 14:43:55 -- pm/common@45 -- $ pid=3539767 00:03:09.608 14:43:55 -- pm/common@52 -- $ sudo kill -TERM 3539767 00:03:09.866 14:43:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:09.866 14:43:55 -- nvmf/common.sh@7 -- # uname -s 00:03:09.866 14:43:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:09.866 14:43:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:09.866 14:43:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:09.866 14:43:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:09.866 14:43:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:09.866 14:43:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:09.866 14:43:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:09.866 14:43:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:09.866 14:43:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:09.866 14:43:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:09.866 14:43:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:09.866 14:43:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:09.866 14:43:55 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:09.866 14:43:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:09.866 14:43:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:09.866 14:43:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:09.866 14:43:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:09.866 14:43:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:09.866 14:43:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:09.866 14:43:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:09.866 14:43:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.866 14:43:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.866 14:43:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.866 14:43:55 -- paths/export.sh@5 -- # export PATH 00:03:09.866 14:43:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.866 14:43:55 -- nvmf/common.sh@47 -- # : 0 00:03:09.866 14:43:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:09.866 14:43:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:09.866 14:43:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:09.866 14:43:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:09.866 14:43:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:09.866 14:43:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:09.866 14:43:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:09.866 14:43:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:09.866 14:43:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:09.866 14:43:55 -- spdk/autotest.sh@32 -- # uname -s 00:03:09.866 14:43:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:09.866 14:43:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:09.866 14:43:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:09.866 14:43:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:09.866 14:43:55 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:09.866 14:43:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:09.866 14:43:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:09.866 14:43:55 -- spdk/autotest.sh@46 -- # 
udevadm=/usr/sbin/udevadm 00:03:09.866 14:43:55 -- spdk/autotest.sh@48 -- # udevadm_pid=3616793 00:03:09.866 14:43:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:09.866 14:43:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:09.866 14:43:55 -- pm/common@17 -- # local monitor 00:03:09.866 14:43:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.866 14:43:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3616794 00:03:09.866 14:43:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.866 14:43:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3616797 00:03:09.866 14:43:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.866 14:43:55 -- pm/common@21 -- # date +%s 00:03:09.866 14:43:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3616800 00:03:09.866 14:43:55 -- pm/common@21 -- # date +%s 00:03:09.866 14:43:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.866 14:43:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3616803 00:03:09.866 14:43:55 -- pm/common@21 -- # date +%s 00:03:09.866 14:43:55 -- pm/common@26 -- # sleep 1 00:03:09.866 14:43:55 -- pm/common@21 -- # date +%s 00:03:09.866 14:43:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135435 00:03:09.866 14:43:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135435 00:03:09.866 14:43:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135435 00:03:09.866 14:43:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135435 00:03:09.866 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135435_collect-vmstat.pm.log 00:03:09.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135435_collect-bmc-pm.bmc.pm.log 00:03:09.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135435_collect-cpu-load.pm.log 00:03:09.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135435_collect-cpu-temp.pm.log 00:03:10.802 14:43:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.802 14:43:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:10.802 14:43:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:10.802 14:43:56 -- common/autotest_common.sh@10 -- # set +x 00:03:10.802 14:43:56 -- spdk/autotest.sh@59 -- # create_test_list 00:03:10.802 14:43:56 -- common/autotest_common.sh@734 -- # xtrace_disable 00:03:10.802 14:43:56 -- common/autotest_common.sh@10 -- # set +x 00:03:10.802 14:43:56 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:10.802 14:43:56 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.802 14:43:56 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.802 14:43:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:10.802 14:43:56 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.802 14:43:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:10.802 14:43:56 -- common/autotest_common.sh@1441 -- # uname 00:03:10.802 14:43:56 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:03:10.802 14:43:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:10.802 14:43:56 -- common/autotest_common.sh@1461 -- # uname 00:03:10.802 14:43:56 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:03:10.802 14:43:56 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:10.802 14:43:56 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:10.802 14:43:56 -- spdk/autotest.sh@72 -- # hash lcov 00:03:10.802 14:43:56 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:10.802 14:43:56 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:10.802 --rc lcov_branch_coverage=1 00:03:10.802 --rc lcov_function_coverage=1 00:03:10.802 --rc genhtml_branch_coverage=1 00:03:10.802 --rc genhtml_function_coverage=1 00:03:10.802 --rc genhtml_legend=1 00:03:10.802 --rc geninfo_all_blocks=1 00:03:10.802 ' 00:03:10.802 14:43:56 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:10.802 --rc lcov_branch_coverage=1 00:03:10.802 --rc lcov_function_coverage=1 00:03:10.802 --rc genhtml_branch_coverage=1 00:03:10.802 --rc genhtml_function_coverage=1 00:03:10.802 --rc genhtml_legend=1 00:03:10.802 --rc geninfo_all_blocks=1 00:03:10.802 ' 00:03:10.802 14:43:56 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:10.802 --rc lcov_branch_coverage=1 00:03:10.802 --rc lcov_function_coverage=1 00:03:10.802 --rc genhtml_branch_coverage=1 00:03:10.802 --rc genhtml_function_coverage=1 00:03:10.802 --rc genhtml_legend=1 00:03:10.802 --rc geninfo_all_blocks=1 00:03:10.802 --no-external' 00:03:10.802 14:43:56 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:10.802 --rc lcov_branch_coverage=1 00:03:10.802 --rc lcov_function_coverage=1 00:03:10.802 --rc genhtml_branch_coverage=1 00:03:10.802 --rc genhtml_function_coverage=1 00:03:10.802 --rc genhtml_legend=1 00:03:10.802 --rc geninfo_all_blocks=1 00:03:10.802 --no-external' 00:03:10.803 14:43:56 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:10.803 lcov: LCOV version 1.14 00:03:10.803 14:43:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 
00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:20.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:20.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:20.815 
00:03:20.815 geninfo: WARNING: GCOV did not produce any data ("no functions found") for the following objects under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: ftl.gcno, gpt_spec.gcno, hexlify.gcno, idxd.gcno, histogram_data.gcno, idxd_spec.gcno, init.gcno, ioat.gcno, ioat_spec.gcno, iscsi_spec.gcno, json.gcno, jsonrpc.gcno, keyring.gcno, keyring_module.gcno, likely.gcno, log.gcno, memory.gcno, lvol.gcno, mmio.gcno, notify.gcno, nbd.gcno, nvme.gcno, nvme_intel.gcno, nvme_ocssd.gcno, nvme_ocssd_spec.gcno, nvme_zns.gcno, nvme_spec.gcno, nvmf_cmd.gcno, nvmf.gcno, nvmf_spec.gcno, nvmf_fc_spec.gcno, nvmf_transport.gcno, opal.gcno, opal_spec.gcno, pci_ids.gcno, pipe.gcno, queue.gcno, reduce.gcno, rpc.gcno, scheduler.gcno, scsi_spec.gcno, scsi.gcno, sock.gcno, stdinc.gcno, thread.gcno, string.gcno, trace_parser.gcno, trace.gcno, tree.gcno, ublk.gcno, util.gcno, uuid.gcno, version.gcno, vfio_user_pci.gcno, vfio_user_spec.gcno, vhost.gcno, vmd.gcno, xor.gcno, zipf.gcno
00:03:26.075 geninfo: WARNING: GCOV did not produce any data ("no functions found") for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:38.266 geninfo: WARNING: GCOV did not produce any data ("no functions found") for ftl_p2l_upgrade.gcno, ftl_band_upgrade.gcno and ftl_chunk_upgrade.gcno under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/
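These warnings are expected rather than a failure: each cpp_headers .gcno file comes from a translation unit that does nothing but include one public header, so it defines no functions and gcov has no coverage records to emit; the same applies to the stub and upgrade objects. Where the noise is unwanted in a coverage run, a thin wrapper along these lines (a hypothetical helper, not part of the SPDK scripts) hides just these two warning forms while preserving geninfo's exit status:

  # hypothetical wrapper: run geninfo but drop the two benign warning forms from stderr
  run_geninfo_quiet() {
      local dir=$1 out=$2
      geninfo "$dir" -o "$out" \
          2> >(grep -v -e 'no functions found' \
                       -e 'did not produce any data' >&2)
  }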
00:03:46.380 14:44:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:46.380 14:44:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:46.380 14:44:31 -- common/autotest_common.sh@10 -- # set +x 00:03:46.380 14:44:31 -- spdk/autotest.sh@91 -- # rm -f 00:03:46.380 14:44:31 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:46.636 0000:82:00.0 (8086 0a54): Already using the nvme driver
00:03:46.637-00:03:46.894 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 (8086 0e20-0e27): Already using the ioatdma driver
00:03:46.894 14:44:32 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:46.894 14:44:32 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:46.894 14:44:32 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:46.894 14:44:32 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:46.894 14:44:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:46.894 14:44:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:46.894 14:44:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:46.894 14:44:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:46.894 14:44:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:46.894 14:44:32 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:46.894 14:44:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:46.894 14:44:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:46.894 14:44:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:46.894 14:44:32 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:46.894 14:44:32 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.152 No valid GPT data, bailing 00:03:47.152 14:44:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.152 14:44:32 -- scripts/common.sh@391 -- # pt= 00:03:47.152 14:44:32 -- scripts/common.sh@392 -- # return 1 00:03:47.152 14:44:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.152 1+0 records in 00:03:47.152 1+0 records out 00:03:47.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00229018 s, 458 MB/s 00:03:47.152 14:44:32 -- spdk/autotest.sh@118 -- # sync 00:03:47.153 14:44:32 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.153 14:44:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.153 14:44:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:49.095 14:44:34 -- spdk/autotest.sh@124 -- # uname -s 00:03:49.095 14:44:34 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:49.095 14:44:34 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:49.095 14:44:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.095 14:44:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.095 14:44:34 -- common/autotest_common.sh@10 -- # set +x 00:03:49.096 ************************************ 00:03:49.096 START TEST setup.sh 00:03:49.096 ************************************ 00:03:49.096 14:44:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:49.096 * Looking for test storage...
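Condensed, the probe sequence in that trace is: skip any zoned namespace (a conventional namespace reports "none" in its queue/zoned attribute), treat a device with no recognizable partition signature as free, and zero its first MiB so stale metadata cannot leak into the tests. A standalone sketch of the same logic (illustrative only; the real autotest also consults spdk-gpt.py and an exclusion list, and its glob skips partitions):

  for dev in /dev/nvme*n1; do
      name=${dev#/dev/}
      # skip zoned namespaces: anything other than "none" means zoned
      if [[ -e /sys/block/$name/queue/zoned && $(< /sys/block/$name/queue/zoned) != none ]]; then
          continue
      fi
      # wipe only devices without a recognizable partition table
      if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1   # zero the first MiB
      fi
  done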
00:03:49.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:49.096 14:44:34 -- setup/test-setup.sh@10 -- # uname -s 00:03:49.096 14:44:34 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:49.096 14:44:34 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:49.096 14:44:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.096 14:44:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.096 14:44:34 -- common/autotest_common.sh@10 -- # set +x 00:03:49.096 ************************************ 00:03:49.096 START TEST acl 00:03:49.096 ************************************ 00:03:49.096 14:44:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:49.353 * Looking for test storage... 00:03:49.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:49.353 14:44:34 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:49.353 14:44:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:49.353 14:44:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:49.353 14:44:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:49.353 14:44:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:49.353 14:44:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:49.353 14:44:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:49.353 14:44:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.353 14:44:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:49.353 14:44:34 -- setup/acl.sh@12 -- # devs=() 00:03:49.353 14:44:34 -- setup/acl.sh@12 -- # declare -a devs 00:03:49.353 14:44:34 -- setup/acl.sh@13 -- # drivers=() 00:03:49.353 14:44:34 -- setup/acl.sh@13 -- # declare -A drivers 00:03:49.353 14:44:34 -- setup/acl.sh@51 -- # setup reset 00:03:49.353 14:44:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.353 14:44:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.728 14:44:36 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:50.728 14:44:36 -- setup/acl.sh@16 -- # local dev driver 00:03:50.728 14:44:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.728 14:44:36 -- setup/acl.sh@15 -- # setup output status 00:03:50.728 14:44:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.728 14:44:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:51.662 Hugepages 00:03:51.662 node hugesize free / total 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # continue 00:03:51.662 14:44:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # continue 00:03:51.662 14:44:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # continue 00:03:51.662 14:44:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.662 00:03:51.662 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # continue 
00:03:51.662 14:44:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:51.662 14:44:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.662 14:44:37 -- setup/acl.sh@20 -- # continue 00:03:51.662 14:44:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ (the same match/skip/read cycle repeats for the remaining ioatdma channels 0000:00:04.1-0000:00:04.7 and 0000:80:04.0-0000:80:04.7) 00:03:51.662 14:44:37 -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:03:51.662 14:44:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:51.662 14:44:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:03:51.662 14:44:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:51.662 14:44:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:51.662 14:44:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.662 14:44:37 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:51.662 14:44:37 -- setup/acl.sh@54 -- # run_test denied denied 00:03:51.662 14:44:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.662 14:44:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.662 14:44:37 -- common/autotest_common.sh@10 -- # set +x 00:03:51.921 ************************************ 00:03:51.921 START TEST denied 00:03:51.921 ************************************ 00:03:51.921 14:44:37 -- common/autotest_common.sh@1111 -- # denied 00:03:51.921 14:44:37 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:03:51.921 14:44:37 -- setup/acl.sh@38 -- # setup output config 00:03:51.921 14:44:37 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:03:51.921 14:44:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.921 14:44:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.297 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:03:53.297 14:44:38 -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:03:53.297 14:44:38 -- setup/acl.sh@28 -- # local dev driver 00:03:53.297 14:44:38 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:53.297 14:44:38 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:03:53.297 14:44:38 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:03:53.297 14:44:38 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:53.297 14:44:38 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:53.297 14:44:38 -- setup/acl.sh@41 -- # setup reset 00:03:53.297 14:44:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.297 14:44:38 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.829 00:03:55.829 real 0m3.676s 00:03:55.829 user 0m1.103s 00:03:55.829 sys 0m1.776s 00:03:55.829 14:44:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.829 14:44:41 -- common/autotest_common.sh@10 -- # set +x 00:03:55.829 ************************************ 00:03:55.829 END TEST denied 00:03:55.829 ************************************
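The denied case above reduces to two assertions: with the controller's BDF listed in PCI_BLOCKED, `setup.sh config` must announce that it skipped the device, and the device must still be bound to its original kernel driver afterwards. In isolation (paths as used in this workspace):

  # 1) setup.sh must refuse to touch a blocked controller
  PCI_BLOCKED='0000:82:00.0' ./spdk/scripts/setup.sh config \
      | grep 'Skipping denied controller at 0000:82:00.0'
  # 2) the controller must still sit on the nvme driver
  [[ $(readlink -f /sys/bus/pci/devices/0000:82:00.0/driver) == */nvme ]]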
00:03:55.829 14:44:41 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:55.829 14:44:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.829 14:44:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.829 14:44:41 -- common/autotest_common.sh@10 -- # set +x 00:03:55.829 ************************************ 00:03:55.829 START TEST allowed 00:03:55.829 ************************************ 00:03:55.829 14:44:41 -- common/autotest_common.sh@1111 -- # allowed 00:03:55.829 14:44:41 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:03:55.829 14:44:41 -- setup/acl.sh@45 -- # setup output config 00:03:55.829 14:44:41 -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:03:55.829 14:44:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.829 14:44:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.355 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.355 14:44:43 -- setup/acl.sh@47 -- # verify 00:03:58.355 14:44:43 -- setup/acl.sh@28 -- # local dev driver 00:03:58.355 14:44:43 -- setup/acl.sh@48 -- # setup reset 00:03:58.355 14:44:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.355 14:44:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.289 00:03:59.289 real 0m3.725s 00:03:59.289 user 0m0.956s 00:03:59.289 sys 0m1.649s 00:03:59.289 14:44:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.289 14:44:44 -- common/autotest_common.sh@10 -- # set +x 00:03:59.289 ************************************ 00:03:59.289 END TEST allowed 00:03:59.289 ************************************ 00:03:59.289 00:03:59.289 real 0m10.212s 00:03:59.289 user 0m3.134s 00:03:59.289 sys 0m5.225s 00:03:59.289 14:44:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.289 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:03:59.289 ************************************ 00:03:59.289 END TEST acl 00:03:59.289 ************************************ 00:03:59.289 14:44:45 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.289 14:44:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.289 14:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.289 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:03:59.548 ************************************ 00:03:59.548 START TEST hugepages 00:03:59.548 ************************************ 00:03:59.548 14:44:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.548 * Looking for test storage...
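The `real/user/sys` triples and the START/END banners throughout this output appear to come from run_test, which wraps each test function in `time` and brackets it with banners. A stripped-down sketch of that pattern (run_test_sketch is hypothetical; SPDK's real helper also validates arguments and manages xtrace state):

  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                # prints the real/user/sys triple on completion
      local rc=$?              # capture the test's exit status
      echo "END TEST $name"
      return $rc
  }

Usage would look like `run_test_sketch denied denied`, mirroring the invocations traced above.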
00:03:59.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:59.548 14:44:45 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:59.548 14:44:45 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:59.548 14:44:45 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:59.549 14:44:45 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:59.549 14:44:45 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:59.549 14:44:45 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:59.549 14:44:45 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:59.549 14:44:45 -- setup/common.sh@18 -- # local node= 00:03:59.549 14:44:45 -- setup/common.sh@19 -- # local var val 00:03:59.549 14:44:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.549 14:44:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.549 14:44:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.549 14:44:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.549 14:44:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.549 14:44:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.549 14:44:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.549 14:44:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.549 14:44:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 24265292 kB' 'MemAvailable: 28015212 kB' 'Buffers: 2696 kB' 'Cached: 13122832 kB' 'SwapCached: 0 kB' 'Active: 10107952 kB' 'Inactive: 3494336 kB' 'Active(anon): 9542168 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479656 kB' 'Mapped: 213368 kB' 'Shmem: 9065408 kB' 'KReclaimable: 203680 kB' 'Slab: 555000 kB' 'SReclaimable: 203680 kB' 'SUnreclaim: 351320 kB' 'KernelStack: 12704 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304780 kB' 'Committed_AS: 10649352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195764 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB' 00:03:59.549 14:44:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.549 14:44:45 -- setup/common.sh@32 -- # continue 00:03:59.549 14:44:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.549 14:44:45 -- setup/common.sh@31 -- # read -r var val _ (the same match/continue/read cycle repeats for every other /proc/meminfo key ahead of Hugepagesize, MemFree through HugePages_Surp, in the order shown in the printf above) 00:03:59.550 14:44:45 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.550 14:44:45 -- setup/common.sh@33 -- # echo 2048 00:03:59.550 14:44:45 -- setup/common.sh@33 -- # return 0 00:03:59.550 14:44:45 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:59.550 14:44:45 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:59.550 14:44:45 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:59.550 14:44:45 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:59.550 14:44:45 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:59.550 14:44:45 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
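All of that xtrace is get_meminfo doing one thing: scanning /proc/meminfo with a colon-and-space IFS until the requested key matches, then echoing the value column; Hugepagesize matches here and yields 2048 (kB). The pattern, reduced to a standalone sketch (this covers only the global /proc/meminfo path; the real helper can also read a single node's meminfo):

  get_meminfo_kb() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }   # value is in kB
      done < /proc/meminfo
      return 1
  }

On this machine, `get_meminfo_kb Hugepagesize` prints 2048.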
00:03:59.550 14:44:45 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:59.550 14:44:45 -- setup/hugepages.sh@207 -- # get_nodes 00:03:59.550 14:44:45 -- setup/hugepages.sh@27 -- # local node 00:03:59.550 14:44:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.550 14:44:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:59.550 14:44:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.550 14:44:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.550 14:44:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.550 14:44:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.550 14:44:45 -- setup/hugepages.sh@208 -- # clear_hp 00:03:59.550 14:44:45 -- setup/hugepages.sh@37 -- # local node hp 00:03:59.550 14:44:45 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.550 14:44:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.550 14:44:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.550 14:44:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.550 14:44:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.550 14:44:45 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.550 14:44:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.550 14:44:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.550 14:44:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.550 14:44:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.550 14:44:45 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.550 14:44:45 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.550 14:44:45 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:59.550 14:44:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.550 14:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.550 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:03:59.808 ************************************ 00:03:59.808 START TEST default_setup 00:03:59.808 ************************************ 00:03:59.808 14:44:45 -- common/autotest_common.sh@1111 -- # default_setup 00:03:59.808 14:44:45 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:59.808 14:44:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.808 14:44:45 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.808 14:44:45 -- setup/hugepages.sh@51 -- # shift 00:03:59.808 14:44:45 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.808 14:44:45 -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.808 14:44:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.808 14:44:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.808 14:44:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.808 14:44:45 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.808 14:44:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.808 14:44:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.808 14:44:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.808 14:44:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.808 14:44:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.808 14:44:45 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
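Two things happen in that stretch: clear_hp zeroes every per-node hugepage pool before the test, and get_test_nr_hugepages turns the requested 2097152 kB (2 GiB) into 2097152 / 2048 = 1024 pages of the default 2048 kB size, all assigned to node 0. The clearing step boils down to:

  # zero every hugepage pool on every NUMA node (needs root)
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
      echo 0 > "$hp"
  done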
00:03:59.808 14:44:45 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.808 14:44:45 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.808 14:44:45 -- setup/hugepages.sh@73 -- # return 0 00:03:59.808 14:44:45 -- setup/hugepages.sh@137 -- # setup output 00:03:59.808 14:44:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.808 14:44:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.178 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 (8086 0e20-0e27): ioatdma -> vfio-pci 00:04:02.113 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.113 14:44:47 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:02.113 14:44:47 -- setup/hugepages.sh@89 -- # local node 00:04:02.113 14:44:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.113 14:44:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.113 14:44:47 -- setup/hugepages.sh@92 -- # local surp 00:04:02.113 14:44:47 -- setup/hugepages.sh@93 -- # local resv 00:04:02.113 14:44:47 -- setup/hugepages.sh@94 -- # local anon 00:04:02.113 14:44:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.113 14:44:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.113 14:44:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.113 14:44:47 -- setup/common.sh@18 -- # local node= 00:04:02.113 14:44:47 -- setup/common.sh@19 -- # local var val 00:04:02.113 14:44:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.113 14:44:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.113 14:44:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.113 14:44:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.113 14:44:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.113 14:44:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.113 14:44:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 14:44:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 14:44:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26366440 kB' 'MemAvailable: 30116344 kB' 'Buffers: 2696 kB' 'Cached: 13122924 kB' 'SwapCached: 0 kB' 'Active: 10128396 kB' 'Inactive: 3494336 kB' 'Active(anon): 9562612 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499916 kB' 'Mapped: 213480 kB' 'Shmem: 9065500 kB' 'KReclaimable: 203648 kB' 'Slab: 554080 kB' 'SReclaimable: 203648 kB' 'SUnreclaim: 350432 kB' 'KernelStack: 12864 kB' 'PageTables: 9708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10671488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
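Before this verification pass, `setup.sh` rebound the sixteen ioatdma channels and the NVMe controller to vfio-pci (the `-> vfio-pci` lines above); the HugePages_Total/Free values of 1024 in the dump confirm the requested pages were reserved. The script hides the rebind mechanics; on kernels with driver_override (3.16+) a single-device rebind is roughly the following sketch (illustrative only; setup.sh also sizes hugepages and fixes device permissions):

  bdf=0000:82:00.0                                            # example BDF
  echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"   # detach current driver
  echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override" # pin the next driver
  echo "$bdf"   > /sys/bus/pci/drivers_probe                  # reprobe -> vfio-pci

The AnonHugePages scan that follows repeats the get_meminfo pattern shown earlier.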
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10671488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:02.113 14:44:47 -- setup/common.sh@32 -- # [read/compare/continue xtrace elided: every /proc/meminfo field is tested against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped until the key matches]
00:04:02.114 14:44:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.114 14:44:47 -- setup/common.sh@33 -- # echo 0
00:04:02.114 14:44:47 -- setup/common.sh@33 -- # return 0
00:04:02.114 14:44:47 -- setup/hugepages.sh@97 -- # anon=0
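The trace above is setup/common.sh's get_meminfo helper at work: it snapshots the meminfo file into an array, then walks it with IFS=': ' reads until the requested field matches and echoes its value (set -x prints the quoted right-hand side of == as \A\n\o\n\H\u\g\e\P\a\g\e\s, which is why the pattern looks escaped). A minimal sketch reconstructed from the xtrace alone -- the in-tree source may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, read the per-NUMA-node sysfs copy instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix

        # This loop is the long run of [[ ... ]] / continue lines in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 here, matching the anon=0 result above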
00:04:02.114 14:44:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.114 14:44:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.114 14:44:47 -- setup/common.sh@18 -- # local node=
00:04:02.114 14:44:47 -- setup/common.sh@19 -- # local var val
00:04:02.114 14:44:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.114 14:44:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.114 14:44:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.114 14:44:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.114 14:44:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.114 14:44:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.114 14:44:47 -- setup/common.sh@31 -- # IFS=': '
00:04:02.114 14:44:47 -- setup/common.sh@31 -- # read -r var val _
00:04:02.114 14:44:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26366860 kB' 'MemAvailable: 30116764 kB' 'Buffers: 2696 kB' 'Cached: 13122924 kB' 'SwapCached: 0 kB' 'Active: 10129136 kB' 'Inactive: 3494336 kB' 'Active(anon): 9563352 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500648 kB' 'Mapped: 213480 kB' 'Shmem: 9065500 kB' 'KReclaimable: 203648 kB' 'Slab: 554080 kB' 'SReclaimable: 203648 kB' 'SUnreclaim: 350432 kB' 'KernelStack: 12848 kB' 'PageTables: 9796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10669124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195988 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:02.114 14:44:47 -- setup/common.sh@32 -- # [read/compare/continue xtrace elided: every field is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped until the key matches]
00:04:02.115 14:44:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.115 14:44:47 -- setup/common.sh@33 -- # echo 0
00:04:02.115 14:44:47 -- setup/common.sh@33 -- # return 0
00:04:02.115 14:44:47 -- setup/hugepages.sh@99 -- # surp=0
00:04:02.115 14:44:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.115 14:44:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.115 14:44:47 -- setup/common.sh@18 -- # local node=
00:04:02.115 14:44:47 -- setup/common.sh@19 -- # local var val
00:04:02.115 14:44:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.115 14:44:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.115 14:44:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.115 14:44:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.115 14:44:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.115 14:44:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.115 14:44:47 -- setup/common.sh@31 -- # IFS=': '
00:04:02.115 14:44:47 -- setup/common.sh@31 -- # read -r var val _
00:04:02.115 14:44:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26367736 kB' 'MemAvailable: 30117640 kB' 'Buffers: 2696 kB' 'Cached: 13122928 kB' 'SwapCached: 0 kB' 'Active: 10125832 kB' 'Inactive: 3494336 kB' 'Active(anon): 9560048 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497788 kB' 'Mapped: 213448 kB' 'Shmem: 9065504 kB' 'KReclaimable: 203648 kB' 'Slab: 554072 kB' 'SReclaimable: 203648 kB' 'SUnreclaim: 350424 kB' 'KernelStack: 12512 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10669140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195860 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:02.115 14:44:47 -- setup/common.sh@32 -- # [read/compare/continue xtrace elided: every field is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped until the key matches]
00:04:02.117 14:44:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:02.117 14:44:47 -- setup/common.sh@33 -- # echo 0
00:04:02.117 14:44:47 -- setup/common.sh@33 -- # return 0
00:04:02.117 14:44:47 -- setup/hugepages.sh@100 -- # resv=0
00:04:02.117 14:44:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:02.117 nr_hugepages=1024
00:04:02.117 14:44:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:02.117 resv_hugepages=0
00:04:02.117 14:44:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:02.117 surplus_hugepages=0
00:04:02.117 14:44:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:02.117 anon_hugepages=0
00:04:02.117 14:44:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.117 14:44:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
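At this point the script has collected every term of the hugepage balance: 1024 pages requested, with anonymous, surplus, and reserved counts all 0. The hugepages.sh checks above, together with the HugePages_Total fetch that follows, amount to the verification sketched below -- variable names follow the trace, and get_meminfo is assumed to behave as in the earlier sketch:

    nr_hugepages=1024                     # pool size requested by this run

    anon=$(get_meminfo AnonHugePages)     # THP-backed anonymous memory, kB -> 0
    surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond the pool   -> 0
    resv=$(get_meminfo HugePages_Rsvd)    # reserved but not yet faulted    -> 0

    # The pool is consistent when the kernel-reported total equals the
    # requested count plus surplus and reserved pages.
    if (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )); then
        echo "hugepage accounting OK"
    fi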
00:04:02.117 14:44:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:02.117 14:44:47 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:02.117 14:44:47 -- setup/common.sh@18 -- # local node=
00:04:02.117 14:44:47 -- setup/common.sh@19 -- # local var val
00:04:02.117 14:44:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.117 14:44:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.117 14:44:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.117 14:44:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.117 14:44:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.117 14:44:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.117 14:44:47 -- setup/common.sh@31 -- # IFS=': '
00:04:02.117 14:44:47 -- setup/common.sh@31 -- # read -r var val _
00:04:02.117 14:44:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26369044 kB' 'MemAvailable: 30118948 kB' 'Buffers: 2696 kB' 'Cached: 13122952 kB' 'SwapCached: 0 kB' 'Active: 10126020 kB' 'Inactive: 3494336 kB' 'Active(anon): 9560236 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498004 kB' 'Mapped: 213416 kB' 'Shmem: 9065528 kB' 'KReclaimable: 203648 kB' 'Slab: 554104 kB' 'SReclaimable: 203648 kB' 'SUnreclaim: 350456 kB' 'KernelStack: 12544 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10668784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195876 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:02.117 14:44:47 -- setup/common.sh@32 -- # [read/compare/continue xtrace elided: every field is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped until the key matches]
00:04:02.118 14:44:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:02.118 14:44:47 -- setup/common.sh@33 -- # echo 1024
00:04:02.118 14:44:47 -- setup/common.sh@33 -- # return 0
00:04:02.118 14:44:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.118 14:44:47 -- setup/hugepages.sh@112 -- # get_nodes
00:04:02.118 14:44:47 -- setup/hugepages.sh@27 -- # local node
00:04:02.118 14:44:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.118 14:44:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:02.118 14:44:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.118 14:44:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:02.118 14:44:47 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:02.118 14:44:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:02.118 14:44:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.118 14:44:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
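The walk now repeats per NUMA node: get_nodes found two nodes, with node0 holding all 1024 pages, and get_meminfo is re-invoked with a node index, which swaps /proc/meminfo for the sysfs per-node file. Lines there carry a "Node 0 " prefix, which is what the extglob substitution in the trace strips before the same field scan runs. A small sketch of just that path, under the same assumptions as the earlier one:

    shopt -s extglob

    node=0
    mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"

    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] && echo "$val"   # node-0 surplus count
    done < <(printf '%s\n' "${mem[@]}")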
00:04:02.118 14:44:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:02.118 14:44:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.118 14:44:47 -- setup/common.sh@18 -- # local node=0
00:04:02.118 14:44:47 -- setup/common.sh@19 -- # local var val
00:04:02.118 14:44:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.118 14:44:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.118 14:44:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:02.118 14:44:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:02.118 14:44:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.118 14:44:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.118 14:44:47 -- setup/common.sh@31 -- # IFS=': '
00:04:02.118 14:44:47 -- setup/common.sh@31 -- # read -r var val _
00:04:02.118 14:44:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 18627320 kB' 'MemUsed: 5992092 kB' 'SwapCached: 0 kB' 'Active: 2909112 kB' 'Inactive: 148128 kB' 'Active(anon): 2707512 kB' 'Inactive(anon): 0 kB' 'Active(file): 201600 kB' 'Inactive(file): 148128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2840752 kB' 'Mapped: 42780 kB' 'AnonPages: 219640 kB' 'Shmem: 2491024 kB' 'KernelStack: 6632 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85672 kB' 'Slab: 248916 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 163244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:02.118 14:44:47 -- setup/common.sh@32 -- # [read/compare/continue xtrace elided: node0 fields tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p from MemTotal through HugePages_Total]
00:04:02.119 14:44:47 -- setup/common.sh@32 -- # [[ HugePages_Free ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 14:44:47 -- setup/common.sh@32 -- # continue 00:04:02.119 14:44:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.119 14:44:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.119 14:44:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.119 14:44:47 -- setup/common.sh@33 -- # echo 0 00:04:02.119 14:44:47 -- setup/common.sh@33 -- # return 0 00:04:02.119 14:44:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.119 14:44:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.119 14:44:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.119 14:44:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.119 14:44:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.119 node0=1024 expecting 1024 00:04:02.119 14:44:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.119 00:04:02.119 real 0m2.467s 00:04:02.119 user 0m0.643s 00:04:02.119 sys 0m0.804s 00:04:02.119 14:44:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:02.119 14:44:47 -- common/autotest_common.sh@10 -- # set +x 00:04:02.119 ************************************ 00:04:02.119 END TEST default_setup 00:04:02.119 ************************************ 00:04:02.119 14:44:47 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:02.119 14:44:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.119 14:44:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.119 14:44:47 -- common/autotest_common.sh@10 -- # set +x 00:04:02.377 ************************************ 00:04:02.377 START TEST per_node_1G_alloc 00:04:02.377 ************************************ 00:04:02.377 14:44:47 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:04:02.377 14:44:47 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:02.377 14:44:47 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:02.377 14:44:47 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:02.377 14:44:47 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:02.377 14:44:47 -- setup/hugepages.sh@51 -- # shift 00:04:02.377 14:44:47 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:02.377 14:44:47 -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.377 14:44:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.377 14:44:47 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:02.377 14:44:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:02.377 14:44:47 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:02.377 14:44:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.377 14:44:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:02.377 14:44:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.377 14:44:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.377 14:44:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.377 14:44:47 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:02.377 14:44:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.377 14:44:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:02.377 14:44:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.377 14:44:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:02.377 14:44:47 -- setup/hugepages.sh@73 -- # return 0 00:04:02.377 14:44:47 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:02.377 
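Worth noting for readers of the trace: the get_test_nr_hugepages records above reduce to simple arithmetic. A minimal sketch with illustrative variable names (size_kb, default_hugepages_kb, nodes are not the script's own; the 2048 kB page size matches the Hugepagesize field printed in the meminfo snapshots below):

  # 1 GiB requested (1048576 kB), split into default-sized hugepages.
  size_kb=1048576
  default_hugepages_kb=2048                            # x86_64 2 MiB default
  nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1048576 / 2048 = 512
  # HUGENODE=0,1: each listed node is assigned the full per-test count,
  # mirroring the two nodes_test[_no_nodes]=512 records in the trace.
  nodes=(0 1)
  for node in "${nodes[@]}"; do
      echo "node${node}: ${nr_hugepages} hugepages"
  done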
00:04:02.377 14:44:47 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:02.377 14:44:47 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:02.377 14:44:47 -- setup/hugepages.sh@146 -- # setup output
00:04:02.377 14:44:47 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.377 14:44:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:03.309 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:03.309 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:03.309 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:03.309 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:03.309 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:03.309 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:03.309 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:03.309 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:03.309 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:03.309 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:03.309 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:03.309 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:03.309 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:03.309 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:03.309 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:03.309 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:03.309 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:03.572 14:44:49 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:03.572 14:44:49 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:03.572 14:44:49 -- setup/hugepages.sh@89 -- # local node
00:04:03.572 14:44:49 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.572 14:44:49 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.572 14:44:49 -- setup/hugepages.sh@92 -- # local surp
00:04:03.572 14:44:49 -- setup/hugepages.sh@93 -- # local resv
00:04:03.572 14:44:49 -- setup/hugepages.sh@94 -- # local anon
00:04:03.572 14:44:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:03.572 14:44:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:03.572 14:44:49 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:03.572 14:44:49 -- setup/common.sh@18 -- # local node=
00:04:03.572 14:44:49 -- setup/common.sh@19 -- # local var val
00:04:03.572 14:44:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.572 14:44:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.572 14:44:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.572 14:44:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.572 14:44:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.572 14:44:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.572 14:44:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26358348 kB' 'MemAvailable: 30108252 kB' 'Buffers: 2696 kB' 'Cached: 13123016 kB' 'SwapCached: 0 kB' 'Active: 10128328 kB' 'Inactive: 3494336 kB' 'Active(anon): 9562544 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500244 kB' 'Mapped: 214348 kB' 'Shmem: 9065592 kB' 'KReclaimable: 203648 kB' 'Slab: 554248 kB' 'SReclaimable: 203648 kB' 'SUnreclaim: 350600 kB' 'KernelStack: 12512 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10672108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195892 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:03.573 14:44:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.573 14:44:49 -- setup/common.sh@33 -- # echo 0
00:04:03.573 14:44:49 -- setup/common.sh@33 -- # return 0
00:04:03.573 14:44:49 -- setup/hugepages.sh@97 -- # anon=0
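The get_meminfo calls traced above all run the same scan: split each meminfo line on ':' and whitespace with IFS=': ' read -r var val _, and print the value of the one requested field. A minimal sketch of that loop reconstructed from the trace (the name get_meminfo_sketch is illustrative, and unlike the real setup/common.sh it reads /proc/meminfo directly rather than mapfile-ing it first):

  get_meminfo_sketch() {
      local get=$1 var val _
      # Scan /proc/meminfo; each line looks like "AnonHugePages:       0 kB".
      while IFS=': ' read -r var val _; do
          # On a match, print the value and stop, as the traced
          # "echo 0" / "return 0" records do above.
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < /proc/meminfo
  }
  get_meminfo_sketch AnonHugePages   # prints 0 on this machine, per the trace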
00:04:03.573 14:44:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.573 14:44:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.573 14:44:49 -- setup/common.sh@18 -- # local node=
00:04:03.573 14:44:49 -- setup/common.sh@19 -- # local var val
00:04:03.573 14:44:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.573 14:44:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.573 14:44:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.573 14:44:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.573 14:44:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.573 14:44:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.573 14:44:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26363408 kB' 'MemAvailable: 30113312 kB' 'Buffers: 2696 kB' 'Cached: 13123024 kB' 'SwapCached: 0 kB' 'Active: 10131424 kB' 'Inactive: 3494336 kB' 'Active(anon): 9565640 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503488 kB' 'Mapped: 214216 kB' 'Shmem: 9065600 kB' 'KReclaimable: 203648 kB' 'Slab: 554268 kB' 'SReclaimable: 203648 kB' 'SUnreclaim: 350620 kB' 'KernelStack: 12560 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10675300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195864 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:03.575 14:44:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.575 14:44:49 -- setup/common.sh@33 -- # echo 0
00:04:03.575 14:44:49 -- setup/common.sh@33 -- # return 0
00:04:03.575 14:44:49 -- setup/hugepages.sh@99 -- # surp=0
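The "local node=" and "[[ -e /sys/devices/system/node/node/meminfo ]]" records that open every scan show how the helper picks its input file: with a node id it would read that node's sysfs meminfo (whose lines carry a "Node N " prefix, stripped by the mem=("${mem[@]#Node +([0-9]) }") record), otherwise it falls back to the global /proc/meminfo, as in every call in this trace. A hedged sketch of that selection (pick_meminfo_source is an illustrative name, not the script's):

  pick_meminfo_source() {
      local node=$1 mem_f=/proc/meminfo
      # Prefer the per-node file when a node id was given and the path exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      echo "$mem_f"
  }
  pick_meminfo_source      # -> /proc/meminfo (no node argument, as above)
  pick_meminfo_source 0    # -> node0's meminfo where that sysfs path exists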
00:04:03.575 14:44:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.575 14:44:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.575 14:44:49 -- setup/common.sh@18 -- # local node=
00:04:03.575 14:44:49 -- setup/common.sh@19 -- # local var val
00:04:03.575 14:44:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.575 14:44:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.575 14:44:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.575 14:44:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.575 14:44:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.575 14:44:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.575 14:44:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26363156 kB' 'MemAvailable: 30113060 kB' 'Buffers: 2696 kB' 'Cached: 13123032 kB' 'SwapCached: 0 kB' 'Active: 10127764 kB' 'Inactive: 3494336 kB' 'Active(anon): 9561980 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498036 kB' 'Mapped: 213864 kB' 'Shmem: 9065608 kB' 'KReclaimable: 203648 kB' 'Slab: 554268 kB' 'SReclaimable: 203648 kB' 'SUnreclaim: 350620 kB' 'KernelStack: 12544 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10674376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195860 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:03.576 14:44:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.576 14:44:49 -- setup/common.sh@33 -- # echo 0
00:04:03.576 14:44:49 -- setup/common.sh@33 -- # return 0
00:04:03.576 14:44:49 -- setup/hugepages.sh@100 -- # resv=0
00:04:03.576 14:44:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:03.576 nr_hugepages=1024
00:04:03.576 14:44:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.576 resv_hugepages=0
00:04:03.576 14:44:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.576 surplus_hugepages=0
00:04:03.576 14:44:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.576 anon_hugepages=0
00:04:03.576 14:44:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.576 14:44:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
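The two (( ... )) checks just above are the core of verify_nr_hugepages: the pool the kernel reports must equal the requested pages plus any surplus and reserved pages. A standalone sketch with this run's values (the names total, surp, and resv are illustrative; the literal 1024 on the left of the traced comparison is the already-expanded pool size):

  nr_hugepages=1024   # requested by the test
  surp=0              # HugePages_Surp, read from /proc/meminfo above
  resv=0              # HugePages_Rsvd, read from /proc/meminfo above
  total=1024          # pool size being verified
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent: ${total} pages"
  fi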
00:04:03.576 14:44:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.576 14:44:49 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.576 14:44:49 -- setup/common.sh@18 -- # local node=
00:04:03.576 14:44:49 -- setup/common.sh@19 -- # local var val
00:04:03.576 14:44:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.576 14:44:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.576 14:44:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.576 14:44:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.576 14:44:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.576 14:44:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.576 14:44:49 -- setup/common.sh@31 -- # IFS=': '
00:04:03.576 14:44:49 -- setup/common.sh@31 -- # read -r var val _
00:04:03.576 14:44:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26363792 kB' 'MemAvailable: 30113696 kB' 'Buffers: 2696 kB' 'Cached: 13123048 kB' 'SwapCached: 0 kB' 'Active: 10126012 kB' 'Inactive: 3494336 kB' 'Active(anon): 9560228 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498036 kB' 'Mapped: 213448 kB' 'Shmem: 9065624 kB' 'KReclaimable: 203648 kB' 'Slab: 554268 kB' 'SReclaimable: 203648 kB' 'SUnreclaim: 350620 kB' 'KernelStack: 12528 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10669208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195844 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
[... xtrace: every key from MemTotal through Unaccepted compared against HugePages_Total and skipped with continue ...]
00:04:03.578 14:44:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.578 14:44:49 -- setup/common.sh@33 -- # echo 1024
00:04:03.578 14:44:49 -- setup/common.sh@33 -- # return 0
00:04:03.578 14:44:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.578 14:44:49 -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.578 14:44:49 -- setup/hugepages.sh@27 -- # local node
00:04:03.578 14:44:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.578 14:44:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:03.578 14:44:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.578 14:44:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:03.578 14:44:49 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:03.578 14:44:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
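
The node= local and the /sys/devices/system/node/node<N>/meminfo existence test in the traces above and below are what switch a lookup between the system-wide file and a per-node view. A sketch of that source selection, assuming the standard sysfs layout (node_meminfo_file is a hypothetical name):

    # Pick the meminfo source for a lookup: system-wide by default,
    # per-node sysfs file when a node number is given.
    node_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            # Entries there carry a "Node <N> " prefix, which the traced
            # script strips with mem=("${mem[@]#Node +([0-9]) }").
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    node_meminfo_file      # -> /proc/meminfo
    node_meminfo_file 0    # -> /sys/devices/system/node/node0/meminfo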
00:04:03.578 14:44:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.578 14:44:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.578 14:44:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:03.578 14:44:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.578 14:44:49 -- setup/common.sh@18 -- # local node=0
00:04:03.578 14:44:49 -- setup/common.sh@19 -- # local var val
00:04:03.578 14:44:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.578 14:44:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.578 14:44:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:03.578 14:44:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:03.578 14:44:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.578 14:44:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.578 14:44:49 -- setup/common.sh@31 -- # IFS=': '
00:04:03.578 14:44:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 19676300 kB' 'MemUsed: 4943112 kB' 'SwapCached: 0 kB' 'Active: 2908800 kB' 'Inactive: 148128 kB' 'Active(anon): 2707200 kB' 'Inactive(anon): 0 kB' 'Active(file): 201600 kB' 'Inactive(file): 148128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2840788 kB' 'Mapped: 42780 kB' 'AnonPages: 219424 kB' 'Shmem: 2491060 kB' 'KernelStack: 6616 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85672 kB' 'Slab: 249068 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 163396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace: node0 keys MemTotal through HugePages_Free compared against HugePages_Surp, each skipped with continue ...]
00:04:03.579 14:44:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.579 14:44:49 -- setup/common.sh@33 -- # echo 0
00:04:03.579 14:44:49 -- setup/common.sh@33 -- # return 0
00:04:03.579 14:44:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.579 14:44:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.579 14:44:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.579 14:44:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:03.579 14:44:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.579 14:44:49 -- setup/common.sh@18 -- # local node=1
00:04:03.579 14:44:49 -- setup/common.sh@19 -- # local var val
00:04:03.579 14:44:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.579 14:44:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.579 14:44:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:03.579 14:44:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:03.579 14:44:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.579 14:44:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.579 14:44:49 -- setup/common.sh@31 -- # IFS=': '
00:04:03.579 14:44:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 6687880 kB' 'MemUsed: 12719364 kB' 'SwapCached: 0 kB' 'Active: 7216984 kB' 'Inactive: 3346208 kB' 'Active(anon): 6852800 kB' 'Inactive(anon): 0 kB' 'Active(file): 364184 kB' 'Inactive(file): 3346208 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10284968 kB' 'Mapped: 170668 kB' 'AnonPages: 278384 kB' 'Shmem: 6574576 kB' 'KernelStack: 5944 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117976 kB' 'Slab: 305200 kB' 'SReclaimable: 117976 kB' 'SUnreclaim: 187224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace: node1 keys MemTotal through HugePages_Free compared against HugePages_Surp, each skipped with continue ...]
00:04:03.580 14:44:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.580 14:44:49 -- setup/common.sh@33 -- # echo 0
00:04:03.580 14:44:49 -- setup/common.sh@33 -- # return 0
00:04:03.580 14:44:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.580 14:44:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.580 14:44:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.580 14:44:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.580 14:44:49 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:03.580 node0=512 expecting 512
00:04:03.580 14:44:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.580 14:44:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.580 14:44:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.580 14:44:49 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:03.580 node1=512 expecting 512
00:04:03.580 14:44:49 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:03.580 real 0m1.376s
00:04:03.580 user 0m0.593s
00:04:03.580 sys 0m0.754s
00:04:03.580 14:44:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:03.580 14:44:49 -- common/autotest_common.sh@10 -- # set +x
00:04:03.580 ************************************
00:04:03.580 END TEST per_node_1G_alloc
00:04:03.580 ************************************
00:04:03.838 14:44:49 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
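
The check that just passed sums, per node, the sysfs allocation plus the global reserved count plus that node's surplus, and compares it against the expected 512. A hedged sketch of the arithmetic (variable names mirror the trace, but the real script reads the values through its get_meminfo helper rather than awk):

    resv=0   # HugePages_Rsvd from the lookup earlier in the trace
    declare -A nodes_test=([0]=512 [1]=512)
    for node in "${!nodes_test[@]}"; do
        # Per-node entries are prefixed "Node <N>", so the key is field 3.
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
            "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += resv + ${surp:-0} ))
        echo "node$node=${nodes_test[node]} expecting 512"
    done

With resv and both surplus counts at 0, node0 and node1 stay at 512 each, and 512 + 512 matches nr_hugepages=1024.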
00:04:03.838 14:44:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:03.838 14:44:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:03.838 14:44:49 -- common/autotest_common.sh@10 -- # set +x
00:04:03.838 ************************************
00:04:03.838 START TEST even_2G_alloc
00:04:03.838 ************************************
00:04:03.838 14:44:49 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:04:03.838 14:44:49 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:03.838 14:44:49 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.838 14:44:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:03.838 14:44:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.838 14:44:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.838 14:44:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:03.838 14:44:49 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:03.838 14:44:49 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.838 14:44:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.838 14:44:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:03.838 14:44:49 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.838 14:44:49 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.838 14:44:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.838 14:44:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:03.838 14:44:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.838 14:44:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:03.838 14:44:49 -- setup/hugepages.sh@83 -- # : 512
00:04:03.838 14:44:49 -- setup/hugepages.sh@84 -- # : 1
00:04:03.838 14:44:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.838 14:44:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:03.838 14:44:49 -- setup/hugepages.sh@83 -- # : 0
00:04:03.838 14:44:49 -- setup/hugepages.sh@84 -- # : 0
00:04:03.838 14:44:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.838 14:44:49 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:03.838 14:44:49 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:03.838 14:44:49 -- setup/hugepages.sh@153 -- # setup output
00:04:03.838 14:44:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.838 14:44:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:05.214 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.214 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:05.214 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.214 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.214 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.214 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:05.214 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:05.214 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:05.214 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:05.214 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.214 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.214 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.214 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.214 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:05.214 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:05.214 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:05.214 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
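
get_test_nr_hugepages_per_node, traced above with no user-pinned nodes, spreads nr_hugepages evenly by filling nodes_test from the last node down, 512 pages per node here. A sketch of that split under the same inputs (the rounding behavior for sizes that do not divide evenly is an assumption, since this run never exercises it):

    nr_hugepages=1024
    no_nodes=2
    declare -a nodes_test=()
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 each
    done
    echo "${nodes_test[@]}"   # 512 512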
Already using the vfio-pci driver 00:04:05.214 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:05.214 14:44:50 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:05.214 14:44:50 -- setup/hugepages.sh@89 -- # local node 00:04:05.214 14:44:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.214 14:44:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.214 14:44:50 -- setup/hugepages.sh@92 -- # local surp 00:04:05.214 14:44:50 -- setup/hugepages.sh@93 -- # local resv 00:04:05.214 14:44:50 -- setup/hugepages.sh@94 -- # local anon 00:04:05.214 14:44:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.214 14:44:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.214 14:44:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.214 14:44:50 -- setup/common.sh@18 -- # local node= 00:04:05.214 14:44:50 -- setup/common.sh@19 -- # local var val 00:04:05.214 14:44:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.214 14:44:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.214 14:44:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.214 14:44:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.214 14:44:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.214 14:44:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26353992 kB' 'MemAvailable: 30103884 kB' 'Buffers: 2696 kB' 'Cached: 13123112 kB' 'SwapCached: 0 kB' 'Active: 10126560 kB' 'Inactive: 3494336 kB' 'Active(anon): 9560776 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498268 kB' 'Mapped: 213552 kB' 'Shmem: 9065688 kB' 'KReclaimable: 203624 kB' 'Slab: 554240 kB' 'SReclaimable: 203624 kB' 'SUnreclaim: 350616 kB' 'KernelStack: 12576 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10669572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB' 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 
00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.214 14:44:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.214 14:44:50 -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field (Zswap through HardwareCorrupted) and skips every non-matching key with continue]
00:04:05.215 14:44:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:05.215 14:44:50 -- setup/common.sh@33 -- # echo 0
00:04:05.215 14:44:50 -- setup/common.sh@33 -- # return 0
00:04:05.215 14:44:50 -- setup/hugepages.sh@97 -- # anon=0
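The block above is one full pass of the get_meminfo helper that this suite calls over and over. Below is a minimal sketch of the logic behind the traced commands, keeping the traced variable names; it reconstructs the helper only approximately, and the real setup/common.sh in the SPDK tree may differ in details such as error handling.

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globs

    # Sketch of get_meminfo reconstructed from the xtrace above (assumption:
    # this mirrors setup/common.sh only approximately).
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer the per-NUMA-node view when present.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # On this box: get_meminfo AnonHugePages -> 0, matching the anon=0 above.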
00:04:05.215 14:44:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:05.215 14:44:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.215 14:44:50 -- setup/common.sh@18 -- # local node=
00:04:05.215 14:44:50 -- setup/common.sh@19 -- # local var val
00:04:05.215 14:44:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.215 14:44:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.215 14:44:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.215 14:44:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.215 14:44:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.215 14:44:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.215 14:44:50 -- setup/common.sh@31 -- # IFS=': '
00:04:05.215 14:44:50 -- setup/common.sh@31 -- # read -r var val _
00:04:05.215 14:44:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26353392 kB' 'MemAvailable: 30103284 kB' 'Buffers: 2696 kB' 'Cached: 13123112 kB' 'SwapCached: 0 kB' 'Active: 10125992 kB' 'Inactive: 3494336 kB' 'Active(anon): 9560208 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497792 kB' 'Mapped: 213560 kB' 'Shmem: 9065688 kB' 'KReclaimable: 203624 kB' 'Slab: 554248 kB' 'SReclaimable: 203624 kB' 'SUnreclaim: 350624 kB' 'KernelStack: 12528 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10669584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195940 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: every field from MemTotal down is tested against HugePages_Surp and skipped with continue until the key matches]
00:04:05.217 14:44:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.217 14:44:50 -- setup/common.sh@33 -- # echo 0
00:04:05.217 14:44:50 -- setup/common.sh@33 -- # return 0
00:04:05.217 14:44:50 -- setup/hugepages.sh@99 -- # surp=0
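For a quick manual check outside the harness, the same single-key lookup is a one-liner; this stand-alone equivalent uses plain awk against /proc/meminfo and no SPDK helpers:

    awk -v key='HugePages_Surp:' '$1 == key { print $2 }' /proc/meminfo
    # prints 0 here, matching the surp=0 the script just recorded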
00:04:05.217 14:44:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.217 14:44:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.217 14:44:50 -- setup/common.sh@18 -- # local node=
00:04:05.217 14:44:50 -- setup/common.sh@19 -- # local var val
00:04:05.217 14:44:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.217 14:44:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.217 14:44:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.217 14:44:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.217 14:44:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.217 14:44:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.217 14:44:50 -- setup/common.sh@31 -- # IFS=': '
00:04:05.217 14:44:50 -- setup/common.sh@31 -- # read -r var val _
00:04:05.217 14:44:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26357752 kB' 'MemAvailable: 30107644 kB' 'Buffers: 2696 kB' 'Cached: 13123112 kB' 'SwapCached: 0 kB' 'Active: 10122196 kB' 'Inactive: 3494336 kB' 'Active(anon): 9556412 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493992 kB' 'Mapped: 212688 kB' 'Shmem: 9065688 kB' 'KReclaimable: 203624 kB' 'Slab: 554248 kB' 'SReclaimable: 203624 kB' 'SUnreclaim: 350624 kB' 'KernelStack: 12528 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10646652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195940 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: per-field scan against HugePages_Rsvd, all non-matching keys skipped with continue]
00:04:05.218 14:44:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.218 14:44:50 -- setup/common.sh@33 -- # echo 0
00:04:05.218 14:44:50 -- setup/common.sh@33 -- # return 0
00:04:05.218 14:44:50 -- setup/hugepages.sh@100 -- # resv=0
00:04:05.218 14:44:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:05.218 nr_hugepages=1024
00:04:05.218 14:44:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.218 resv_hugepages=0
00:04:05.218 14:44:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.218 surplus_hugepages=0
00:04:05.218 14:44:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.218 anon_hugepages=0
00:04:05.218 14:44:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.218 14:44:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
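Written out, the two arithmetic guards above assert that the kernel's view of the pool matches what the test requested. A sketch with this run's numbers follows; variable names follow the trace, and the awk lookup is a stand-in for get_meminfo:

    nr_hugepages=1024   # pool size requested by the test
    surp=0              # HugePages_Surp: surplus pages beyond the static pool
    resv=0              # HugePages_Rsvd: pages reserved but not yet faulted in
    anon=0              # AnonHugePages: transparent huge pages, counted apart

    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_hugepages )) && echo 'pool is exactly the requested size'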
00:04:05.218 14:44:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.218 14:44:50 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.218 14:44:50 -- setup/common.sh@18 -- # local node=
00:04:05.218 14:44:50 -- setup/common.sh@19 -- # local var val
00:04:05.218 14:44:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.218 14:44:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.218 14:44:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.218 14:44:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.218 14:44:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.219 14:44:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.219 14:44:50 -- setup/common.sh@31 -- # IFS=': '
00:04:05.219 14:44:50 -- setup/common.sh@31 -- # read -r var val _
00:04:05.219 14:44:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26363584 kB' 'MemAvailable: 30113464 kB' 'Buffers: 2696 kB' 'Cached: 13123140 kB' 'SwapCached: 0 kB' 'Active: 10119796 kB' 'Inactive: 3494336 kB' 'Active(anon): 9554012 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491548 kB' 'Mapped: 212420 kB' 'Shmem: 9065716 kB' 'KReclaimable: 203600 kB' 'Slab: 554072 kB' 'SReclaimable: 203600 kB' 'SUnreclaim: 350472 kB' 'KernelStack: 12448 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10646668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195828 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: per-field scan against HugePages_Total, all non-matching keys skipped with continue]
00:04:05.220 14:44:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.220 14:44:50 -- setup/common.sh@33 -- # echo 1024
00:04:05.220 14:44:50 -- setup/common.sh@33 -- # return 0
00:04:05.220 14:44:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.220 14:44:50 -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.220 14:44:50 -- setup/hugepages.sh@27 -- # local node
00:04:05.220 14:44:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.220 14:44:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:05.220 14:44:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.220 14:44:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:05.220 14:44:50 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:05.220 14:44:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
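get_nodes above found two NUMA nodes and expects the 1024-page pool split 512/512 between them. A self-contained sketch of that per-node check; the sysfs paths are the standard ones, and the 512/1024 figures are this run's values:

    #!/usr/bin/env bash
    shopt -s extglob
    # Walk the nodes the same way get_nodes does and confirm the per-node
    # hugepage totals add up to the global pool.
    sum=0
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}
        total=$(awk '/HugePages_Total/ { print $NF }' "$node/meminfo")
        echo "node$id: $total hugepages"   # 512 each in this run
        (( sum += total ))
    done
    (( sum == 1024 )) && echo 'per-node split covers the whole 1024-page pool'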
00:04:05.220 14:44:50 -- setup/common.sh@31 -- # IFS=': '
00:04:05.220 14:44:50 -- setup/common.sh@31 -- # read -r var val _
00:04:05.220 [xtrace elided: setup/common.sh@32 compares each remaining node0 meminfo key (SwapCached ... HugePages_Free) against HugePages_Surp and continues past all of them]
00:04:05.221 14:44:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.221 14:44:50 -- setup/common.sh@33 -- # echo 0
00:04:05.221 14:44:50 -- setup/common.sh@33 -- # return 0
00:04:05.221 14:44:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.221 14:44:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.221 14:44:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:05.221 14:44:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:05.221 14:44:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.221 14:44:50 -- setup/common.sh@18 -- # local node=1
00:04:05.221 14:44:50 -- setup/common.sh@19 -- # local var val
00:04:05.221 14:44:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.221 14:44:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.221 14:44:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:05.221 14:44:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:05.221 14:44:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.221 14:44:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
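The `get_meminfo` calls traced above all follow one pattern: read `/proc/meminfo`, or the per-node `/sys/devices/system/node/nodeN/meminfo` when a node argument is given, strip the `Node N ` prefix those per-node lines carry, then scan key by key until the requested field matches. A minimal sketch of that pattern, assuming the trace reflects the script's logic (`get_meminfo_sketch` is an illustrative name, not the SPDK `setup/common.sh` source):

```bash
#!/usr/bin/env bash
shopt -s extglob   # needed for the Node-prefix strip below

# get_meminfo_sketch KEY [NODE] - print KEY's value (kB or page count).
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _ line
    # Per-NUMA-node counters live in sysfs; each line starts with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the loop the xtrace above repeats
        echo "$val"
        return 0
    done
    return 1
}

# e.g. get_meminfo_sketch HugePages_Surp 1   -> 0 on the box traced here
```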
00:04:05.221 14:44:50 -- setup/common.sh@31 -- # IFS=': '
00:04:05.221 14:44:50 -- setup/common.sh@31 -- # read -r var val _
00:04:05.221 14:44:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 6685988 kB' 'MemUsed: 12721256 kB' 'SwapCached: 0 kB' 'Active: 7215644 kB' 'Inactive: 3346208 kB' 'Active(anon): 6851460 kB' 'Inactive(anon): 0 kB' 'Active(file): 364184 kB' 'Inactive(file): 3346208 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10285068 kB' 'Mapped: 169640 kB' 'AnonPages: 276912 kB' 'Shmem: 6574676 kB' 'KernelStack: 5880 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117984 kB' 'Slab: 305204 kB' 'SReclaimable: 117984 kB' 'SUnreclaim: 187220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:05.222 [xtrace elided: setup/common.sh@32 skips every node1 meminfo key until HugePages_Surp]
00:04:05.222 14:44:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.222 14:44:50 -- setup/common.sh@33 -- # echo 0
00:04:05.222 14:44:50 -- setup/common.sh@33 -- # return 0
00:04:05.222 14:44:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.222 14:44:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.222 14:44:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.222 14:44:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.222 14:44:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:05.222 node0=512 expecting 512
00:04:05.222 14:44:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.223 14:44:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.223 14:44:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.223 14:44:50 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:05.223 node1=512 expecting 512
00:04:05.223 14:44:50 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:05.223
00:04:05.223 real    0m1.413s
00:04:05.223 user    0m0.582s
00:04:05.223 sys     0m0.803s
00:04:05.223 14:44:50 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:05.223 14:44:50 -- common/autotest_common.sh@10 -- # set +x
00:04:05.223 ************************************
00:04:05.223 END TEST even_2G_alloc
00:04:05.223 ************************************
00:04:05.223 14:44:50 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:05.223 14:44:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:05.223 14:44:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:05.223 14:44:50 -- common/autotest_common.sh@10 -- # set +x
00:04:05.482 ************************************
00:04:05.482 START TEST odd_alloc
00:04:05.482 ************************************
00:04:05.482 14:44:50 -- common/autotest_common.sh@1111 -- # odd_alloc
00:04:05.482 14:44:50 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:05.482 14:44:50 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:05.482 14:44:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:05.482 14:44:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:05.482 14:44:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:05.482 14:44:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:05.482 14:44:50 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:05.482 14:44:50 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:05.482 14:44:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:05.482 14:44:50 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:05.482 14:44:50 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:05.482 14:44:50 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:05.482 14:44:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:05.482 14:44:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:05.482 14:44:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:05.482 14:44:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:05.482 14:44:50 -- setup/hugepages.sh@83 -- # : 513
00:04:05.482 14:44:50 -- setup/hugepages.sh@84 -- # : 1
00:04:05.482 14:44:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:05.482 14:44:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:05.482 14:44:50 -- setup/hugepages.sh@83 -- # : 0
00:04:05.482 14:44:50 -- setup/hugepages.sh@84 -- # : 0
00:04:05.482 14:44:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
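The `@81`-`@84` loop just traced is what turns the requested 1025 pages (an odd count, hence the test name) into a per-node split of 513 on node0 and 512 on node1. A hedged reconstruction of that arithmetic, with variable names mirroring the trace (the real `setup/hugepages.sh` may differ in detail but produces these same values):

```bash
# Hand each node the floor share of what remains, walking from the last
# node back to node 0, so the remainder lands on the earlier nodes.
_nr_hugepages=1025
_no_nodes=2
declare -a nodes_test
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))      # 512, then 513
    _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))  # 513 left, then 0
    (( _no_nodes-- ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512
```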
00:04:05.482 14:44:50 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:05.482 14:44:50 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:05.482 14:44:50 -- setup/hugepages.sh@160 -- # setup output
00:04:05.482 14:44:50 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.482 14:44:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:06.416 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:06.416 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:06.416 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:06.416 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:06.416 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:06.416 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:06.416 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:06.416 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:06.416 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:06.416 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:06.416 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:06.416 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:06.416 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:06.416 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:06.416 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:06.416 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:06.416 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:06.679 14:44:52 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:06.679 14:44:52 -- setup/hugepages.sh@89 -- # local node
00:04:06.679 14:44:52 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.679 14:44:52 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.679 14:44:52 -- setup/hugepages.sh@92 -- # local surp
00:04:06.679 14:44:52 -- setup/hugepages.sh@93 -- # local resv
00:04:06.679 14:44:52 -- setup/hugepages.sh@94 -- # local anon
00:04:06.679 14:44:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.679 14:44:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.679 14:44:52 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.679 14:44:52 -- setup/common.sh@18 -- # local node=
00:04:06.679 14:44:52 -- setup/common.sh@19 -- # local var val
00:04:06.679 14:44:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.679 14:44:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.679 14:44:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.679 14:44:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.679 14:44:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.679 14:44:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.679 14:44:52 -- setup/common.sh@31 -- # IFS=': '
00:04:06.680 14:44:52 -- setup/common.sh@31 -- # read -r var val _
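The `@96` test above is a transparent-hugepage gate: `always [madvise] never` is how the kernel renders `/sys/kernel/mm/transparent_hugepage/enabled`, with brackets marking the active mode, and the pattern match asks whether that mode is anything other than `[never]`. A small sketch of the same check, assuming the surrounding accounting can be simplified to one read (reusing the hypothetical `get_meminfo_sketch` helper from earlier):

```bash
# Only count AnonHugePages toward the test when THP is not disabled:
# the sysfs file reads e.g. "always [madvise] never".
thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp_mode != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in the run traced here
fi
echo "anon=$anon"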
00:04:06.680 14:44:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26348904 kB' 'MemAvailable: 30098776 kB' 'Buffers: 2696 kB' 'Cached: 13123208 kB' 'SwapCached: 0 kB' 'Active: 10121276 kB' 'Inactive: 3494336 kB' 'Active(anon): 9555492 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492992 kB' 'Mapped: 212524 kB' 'Shmem: 9065784 kB' 'KReclaimable: 203584 kB' 'Slab: 553784 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350200 kB' 'KernelStack: 12960 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 10647864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:06.680 [xtrace elided: setup/common.sh@32 skips every meminfo key until AnonHugePages]
00:04:06.681 14:44:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.681 14:44:52 -- setup/common.sh@33 -- # echo 0
00:04:06.681 14:44:52 -- setup/common.sh@33 -- # return 0
00:04:06.681 14:44:52 -- setup/hugepages.sh@97 -- # anon=0
00:04:06.681 14:44:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.681 14:44:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.681 14:44:52 -- setup/common.sh@18 -- # local node=
00:04:06.681 14:44:52 -- setup/common.sh@19 -- # local var val
00:04:06.681 14:44:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.681 14:44:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.681 14:44:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.681 14:44:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.681 14:44:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.681 14:44:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.681 14:44:52 -- setup/common.sh@31 -- # IFS=': '
00:04:06.681 14:44:52 -- setup/common.sh@31 -- # read -r var val _
00:04:06.681 14:44:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26353692 kB' 'MemAvailable: 30103564 kB' 'Buffers: 2696 kB' 'Cached: 13123216 kB' 'SwapCached: 0 kB' 'Active: 10121952 kB' 'Inactive: 3494336 kB' 'Active(anon): 9556168 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493724 kB' 'Mapped: 212588 kB' 'Shmem: 9065792 kB' 'KReclaimable: 203584 kB' 'Slab: 553832 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350248 kB' 'KernelStack: 12896 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 10646868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195860 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
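Back-to-back snapshots like the two above are `verify_nr_hugepages` collecting its `anon`, `surp`, and `resv` inputs one key at a time. A hedged condensation of those reads, again via the hypothetical `get_meminfo_sketch` helper (the real script then carries these into per-node totals, as the earlier `@116`-`@117` trace showed):

```bash
# The three global reads traced in this run, all returning 0 here.
anon=$(get_meminfo_sketch AnonHugePages)    # anonymous THP in use, kB
surp=$(get_meminfo_sketch HugePages_Surp)   # surplus pages beyond the static pool
resv=$(get_meminfo_sketch HugePages_Rsvd)   # reserved but not yet faulted in
echo "anon=$anon surp=$surp resv=$resv"     # anon=0 surp=0 resv=0
```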
00:04:06.682 [xtrace elided: setup/common.sh@32 skips every meminfo key until HugePages_Surp]
00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.683 14:44:52 -- setup/common.sh@33 -- # echo 0
00:04:06.683 14:44:52 -- setup/common.sh@33 -- # return 0
00:04:06.683 14:44:52 -- setup/hugepages.sh@99 -- # surp=0
00:04:06.683 14:44:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.683 14:44:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.683 14:44:52 -- setup/common.sh@18 -- # local node=
00:04:06.683 14:44:52 -- setup/common.sh@19 -- # local var val
00:04:06.683 14:44:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.683 14:44:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.683 14:44:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.683 14:44:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.683 14:44:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.683 14:44:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': '
00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _
00:04:06.683 14:44:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26353988 kB' 'MemAvailable: 30103860 kB' 'Buffers: 2696 kB' 'Cached: 13123220 kB' 'SwapCached: 0 kB' 'Active: 10120020 kB' 'Inactive: 3494336 kB' 'Active(anon): 9554236 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491708 kB' 'Mapped: 212552 kB' 'Shmem: 9065796 kB' 'KReclaimable: 203584 kB' 'Slab: 553800 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350216 kB' 'KernelStack: 12432 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 10646884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195796 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
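For a single key, the whole field-by-field scan traced above collapses to a one-line lookup; a possible equivalent shown for comparison only, not how `setup/common.sh` is actually written:

```bash
# Same answers as the traced scans, via awk field matching:
awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # -> 0
awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo   # -> 0
# Per-node variant: sysfs prefixes each line with "Node N", shifting the fields.
awk '$3 == "HugePages_Surp:" {print $4}' /sys/devices/system/node/node1/meminfo
```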
val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.683 14:44:52 -- setup/common.sh@32 -- # continue 00:04:06.683 14:44:52 -- 
setup/common.sh@31 -- # IFS=': '
00:04:06.683 14:44:52 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: setup/common.sh@31-32 read each remaining /proc/meminfo key (SwapTotal through HugePages_Free), fail the HugePages_Rsvd match, and hit 'continue' ...]
00:04:06.684 14:44:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.684 14:44:52 -- setup/common.sh@33 -- # echo 0
00:04:06.684 14:44:52 -- setup/common.sh@33 -- # return 0
00:04:06.684 14:44:52 -- setup/hugepages.sh@100 -- # resv=0
00:04:06.684 14:44:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:06.684 nr_hugepages=1025
00:04:06.685 14:44:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.685 resv_hugepages=0
00:04:06.685 14:44:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.685 surplus_hugepages=0
00:04:06.685 14:44:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.685 anon_hugepages=0
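The scan just traced is setup/common.sh's get_meminfo helper doing a linear search over meminfo keys, which is why the log shows one [[ ... ]]/continue pair per key. A minimal sketch of that technique, reconstructed from the trace alone (not the script's verbatim source; error handling is simplified):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix strip seen at common.sh@29
    get_meminfo() {    # usage: get_meminfo <Key> [numa-node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ line
        # Per-node counters come from sysfs when a node index is supplied.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines begin "Node 0 ..."
        for line in "${mem[@]}"; do
            # IFS=': ' splits "HugePages_Rsvd:      0" into key and value.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Rsvd   # prints 0 on this box, matching resv=0 above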
00:04:06.685 14:44:52 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:06.685 14:44:52 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:06.685 14:44:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.685 14:44:52 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.685 14:44:52 -- setup/common.sh@18 -- # local node=
00:04:06.685 14:44:52 -- setup/common.sh@19 -- # local var val
00:04:06.685 14:44:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.685 14:44:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.685 14:44:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.685 14:44:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.685 14:44:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.685 14:44:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.685 14:44:52 -- setup/common.sh@31 -- # IFS=': '
00:04:06.685 14:44:52 -- setup/common.sh@31 -- # read -r var val _
00:04:06.685 14:44:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26354680 kB' 'MemAvailable: 30104552 kB' 'Buffers: 2696 kB' 'Cached: 13123240 kB' 'SwapCached: 0 kB' 'Active: 10119972 kB' 'Inactive: 3494336 kB' 'Active(anon): 9554188 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491620 kB' 'Mapped: 212444 kB' 'Shmem: 9065816 kB' 'KReclaimable: 203584 kB' 'Slab: 553712 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350128 kB' 'KernelStack: 12432 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 10646896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195796 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
[... xtrace elided: every key from MemTotal through HugePages_Free fails the HugePages_Total match and hits 'continue' ...]
00:04:06.686 14:44:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.686 14:44:52 -- setup/common.sh@33 -- # echo 1025
00:04:06.686 14:44:52 -- setup/common.sh@33 -- # return 0
00:04:06.686 14:44:52 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:06.686 14:44:52 -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.687 14:44:52 -- setup/hugepages.sh@27 -- # local node
00:04:06.687 14:44:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.687 14:44:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:06.687 14:44:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.687 14:44:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:06.687 14:44:52 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:06.687 14:44:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:06.687 14:44:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.687 14:44:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.687 14:44:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
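With resv known, hugepages.sh@107-117 re-check the global total and then accumulate reserved plus surplus pages per NUMA node. A hedged sketch of that accounting, reusing the get_meminfo sketch above (the literals are this run's values, not constants of the script):

    nr_hugepages=1025 surp=0 resv=0   # echoed earlier in this run
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) ||
        echo 'global hugepage count mismatch' >&2
    # get_nodes found node0 and node1 under /sys on this rig.
    declare -a nodes_test=()
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        (( nodes_test[n] += resv ))                                  # hugepages.sh@116
        (( nodes_test[n] += $(get_meminfo HugePages_Surp "$n") ))   # hugepages.sh@117
    done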
00:04:06.687 14:44:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.687 14:44:52 -- setup/common.sh@18 -- # local node=0
00:04:06.687 14:44:52 -- setup/common.sh@19 -- # local var val
00:04:06.687 14:44:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.687 14:44:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.687 14:44:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:06.687 14:44:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:06.687 14:44:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.687 14:44:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.687 14:44:52 -- setup/common.sh@31 -- # IFS=': '
00:04:06.687 14:44:52 -- setup/common.sh@31 -- # read -r var val _
00:04:06.687 14:44:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 19667476 kB' 'MemUsed: 4951936 kB' 'SwapCached: 0 kB' 'Active: 2904072 kB' 'Inactive: 148128 kB' 'Active(anon): 2702472 kB' 'Inactive(anon): 0 kB' 'Active(file): 201600 kB' 'Inactive(file): 148128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2840800 kB' 'Mapped: 42780 kB' 'AnonPages: 214588 kB' 'Shmem: 2491072 kB' 'KernelStack: 6568 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85608 kB' 'Slab: 248632 kB' 'SReclaimable: 85608 kB' 'SUnreclaim: 163024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: node0 keys MemTotal through HugePages_Free fail the HugePages_Surp match and hit 'continue' ...]
00:04:06.688 14:44:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.688 14:44:52 -- setup/common.sh@33 -- # echo 0
00:04:06.688 14:44:52 -- setup/common.sh@33 -- # return 0
00:04:06.688 14:44:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
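The node0 dump above is internally consistent: its MemUsed field is simply MemTotal minus MemFree. Checking this run's numbers (values copied from the printf line):

    echo $(( 24619412 - 19667476 ))   # 4951936 kB, the MemUsed reported for node0

The node1 dump that follows obeys the same identity (19407244 - 6687204 = 12720040).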
00:04:06.688 14:44:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.688 14:44:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.688 14:44:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:06.688 14:44:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.688 14:44:52 -- setup/common.sh@18 -- # local node=1
00:04:06.688 14:44:52 -- setup/common.sh@19 -- # local var val
00:04:06.688 14:44:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.688 14:44:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.688 14:44:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:06.688 14:44:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:06.688 14:44:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.688 14:44:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.688 14:44:52 -- setup/common.sh@31 -- # IFS=': '
00:04:06.688 14:44:52 -- setup/common.sh@31 -- # read -r var val _
00:04:06.688 14:44:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 6687204 kB' 'MemUsed: 12720040 kB' 'SwapCached: 0 kB' 'Active: 7215772 kB' 'Inactive: 3346208 kB' 'Active(anon): 6851588 kB' 'Inactive(anon): 0 kB' 'Active(file): 364184 kB' 'Inactive(file): 3346208 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10285164 kB' 'Mapped: 169664 kB' 'AnonPages: 276884 kB' 'Shmem: 6574772 kB' 'KernelStack: 5896 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117976 kB' 'Slab: 305080 kB' 'SReclaimable: 117976 kB' 'SUnreclaim: 187104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[... xtrace elided: node1 keys MemTotal through HugePages_Free fail the HugePages_Surp match and hit 'continue' ...]
00:04:06.689 14:44:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.689 14:44:52 -- setup/common.sh@33 -- # echo 0
00:04:06.689 14:44:52 -- setup/common.sh@33 -- # return 0
00:04:06.689 14:44:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.689 14:44:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.689 14:44:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.689 14:44:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.689 14:44:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:06.689 node0=512 expecting 513
00:04:06.689 14:44:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.689 14:44:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.689 14:44:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.689 14:44:52 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:06.689 node1=513 expecting 512
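Those two echoes are the point of odd_alloc: an odd page count (1025) cannot split evenly across two nodes, so one node should end up with 512 pages and the other with 513, in either order. The script compares value sets through its sorted_t/sorted_s arrays; a sort-based sketch of the same order-insensitive check, using this run's values:

    nodes_test=(512 513)   # per-node HugePages_Total gathered above
    nodes_sys=(513 512)    # kernel-reported per-node counts; order may differ
    sorted_t=$(printf '%s\n' "${nodes_test[@]}" | sort -n | xargs)
    sorted_s=$(printf '%s\n' "${nodes_sys[@]}" | sort -n | xargs)
    [[ $sorted_t == "$sorted_s" ]] && echo "odd_alloc split verified: $sorted_t"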
00:04:06.689 14:44:52 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:06.689 
00:04:06.689 real 0m1.391s
00:04:06.689 user 0m0.584s
00:04:06.689 sys 0m0.780s
00:04:06.689 14:44:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:06.689 14:44:52 -- common/autotest_common.sh@10 -- # set +x
00:04:06.689 ************************************
00:04:06.689 END TEST odd_alloc
00:04:06.689 ************************************
00:04:06.689 14:44:52 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:06.689 14:44:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:06.689 14:44:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:06.689 14:44:52 -- common/autotest_common.sh@10 -- # set +x
00:04:06.947 ************************************
00:04:06.947 START TEST custom_alloc
00:04:06.947 ************************************
00:04:06.947 14:44:52 -- common/autotest_common.sh@1111 -- # custom_alloc
00:04:06.947 14:44:52 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:06.947 14:44:52 -- setup/hugepages.sh@169 -- # local node
00:04:06.947 14:44:52 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:06.947 14:44:52 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:06.947 14:44:52 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:06.948 14:44:52 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:06.948 14:44:52 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:06.948 14:44:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:06.948 14:44:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.948 14:44:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:06.948 14:44:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[... xtrace elided: hugepages.sh@62-84 take the default path and split the 512 pages evenly, nodes_test[0]=nodes_test[1]=256 ...]
00:04:06.948 14:44:52 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:06.948 14:44:52 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:06.948 14:44:52 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:06.948 14:44:52 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:06.948 14:44:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:06.948 14:44:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.948 14:44:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:06.948 14:44:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[... xtrace elided: hugepages.sh@62-78 honour the existing nodes_hp entry, nodes_test[0]=512 ...]
00:04:06.948 14:44:52 -- setup/hugepages.sh@78 -- # return 0
00:04:06.948 14:44:52 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:06.948 14:44:52 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:06.948 14:44:52 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:06.948 14:44:52 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:06.948 14:44:52 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:06.948 14:44:52 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:06.948 14:44:52 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:06.948 14:44:52 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[... xtrace elided: hugepages.sh@62-78 re-run against both nodes_hp entries, nodes_test[0]=512 and nodes_test[1]=1024 ...]
00:04:06.948 14:44:52 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
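custom_alloc's setup is now visible in full: two get_test_nr_hugepages calls turn kB sizes into 2048 kB pages, pin them to nodes 0 and 1 through nodes_hp, and join the pairs into the HUGENODE string handed to setup.sh. A condensed sketch of that construction (a sketch of the traced logic, not the script verbatim):

    default_hugepages=2048                           # kB, per 'Hugepagesize: 2048 kB'
    nodes_hp=()
    nodes_hp[0]=$(( 1048576 / default_hugepages ))   # 1 GiB on node 0 -> 512 pages
    nodes_hp[1]=$(( 2097152 / default_hugepages ))   # 2 GiB on node 1 -> 1024 pages
    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # hugepages.sh@182
    done
    ( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )
    # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024, exactly as set at @187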
00:04:06.948 14:44:52 -- setup/hugepages.sh@187 -- # setup output
00:04:06.948 14:44:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.948 14:44:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:08.367 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:08.368 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:08.368 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:08.368 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:08.368 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:08.368 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:08.368 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:08.368 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:08.368 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:08.368 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:08.368 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:08.368 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:08.368 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:08.368 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:08.368 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:08.368 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:08.368 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:08.368 14:44:53 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:08.368 14:44:53 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:08.368 14:44:53 -- setup/hugepages.sh@89 -- # local node
00:04:08.368 14:44:53 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:08.368 14:44:53 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:08.368 14:44:53 -- setup/hugepages.sh@92 -- # local surp
00:04:08.368 14:44:53 -- setup/hugepages.sh@93 -- # local resv
00:04:08.368 14:44:53 -- setup/hugepages.sh@94 -- # local anon
00:04:08.368 14:44:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:08.368 14:44:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:08.368 14:44:53 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:08.368 14:44:53 -- setup/common.sh@18 -- # local node=
00:04:08.368 14:44:53 -- setup/common.sh@19 -- # local var val
00:04:08.368 14:44:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.368 14:44:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.368 14:44:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.368 14:44:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.368 14:44:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.368 14:44:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': '
00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _
00:04:08.368 14:44:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 25310888 kB' 'MemAvailable: 29060760 kB' 'Buffers: 2696 kB' 'Cached: 13123308 kB' 'SwapCached: 0 kB' 'Active: 10120396 kB' 'Inactive: 3494336 kB' 'Active(anon): 9554612 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492000 kB' 'Mapped: 213020 kB' 'Shmem: 9065884 kB' 'KReclaimable: 203584 kB' 'Slab: 553668 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350084 kB' 'KernelStack: 12464 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 10647080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195828 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 
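[editor's note] The common.sh trace above is the meminfo parser doing its scan: get_meminfo mapfile-reads the whole file, strips any leading 'Node N ' prefix, then splits each line on IFS=': ' and walks key by key (hence the long run of '[[ key == ... ]]' / 'continue' entries) until it reaches the requested field. A minimal standalone sketch of the same idea; the function name get_meminfo_field is illustrative, not the script's own:

# Sketch: fetch one field from /proc/meminfo the way the traced loop does.
get_meminfo_field() {
    local field=$1 var val _
    while IFS=': ' read -r var val _; do
        # 'MemTotal: 44026656 kB' splits into var=MemTotal, val=44026656, _=kB
        [[ $var == "$field" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

anon=$(get_meminfo_field AnonHugePages)   # 0 kB in this run, hence anon=0 below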
14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.368 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.368 14:44:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- 
setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.369 14:44:53 -- setup/common.sh@33 -- # echo 0 00:04:08.369 14:44:53 -- setup/common.sh@33 -- # return 0 00:04:08.369 14:44:53 -- setup/hugepages.sh@97 -- # anon=0 00:04:08.369 14:44:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.369 14:44:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.369 14:44:53 -- setup/common.sh@18 -- # local node= 00:04:08.369 14:44:53 -- setup/common.sh@19 -- # local var val 00:04:08.369 14:44:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.369 14:44:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.369 14:44:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.369 14:44:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.369 14:44:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.369 14:44:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 25313208 kB' 'MemAvailable: 29063080 kB' 'Buffers: 2696 kB' 'Cached: 13123308 kB' 'SwapCached: 0 kB' 'Active: 10121032 kB' 'Inactive: 3494336 kB' 'Active(anon): 9555248 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492648 kB' 'Mapped: 212540 kB' 'Shmem: 9065884 kB' 'KReclaimable: 203584 kB' 'Slab: 553656 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350072 kB' 'KernelStack: 12480 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 10647092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195780 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB' 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 
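[editor's note] For context on the numbers being verified here: the hugepages.sh trace at the top of this excerpt (@181 through @187) folded the per-node targets nodes_hp[0]=512 and nodes_hp[1]=1024 into the spec string HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and an accumulated total of 1536 pages. A hedged sketch of that assembly step; the array names mirror the trace, while the comma join is an assumption about how setup.sh consumes the spec:

# Sketch: build a per-node hugepage spec like the one in the trace above.
declare -a nodes_hp=([0]=512 [1]=1024)   # per-node 2 MiB page targets
declare -a spec=()
total=0
for node in "${!nodes_hp[@]}"; do
    spec+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( total += nodes_hp[node] ))
done
HUGENODE=$(IFS=','; echo "${spec[*]}")   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "HUGENODE=$HUGENODE total=$total"   # total=1536, matching nr_hugepages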
14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 
14:44:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.369 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.369 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 
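[editor's note] The repeated get_meminfo passes in this stretch all feed one accounting identity: verify_nr_hugepages checks that the kernel's HugePages_Total equals the requested page count plus surplus and reserved pages, and the printf dumps above already show those as 1536, 0 and 0. Reusing the illustrative get_meminfo_field helper sketched earlier:

# Sketch: the identity verify_nr_hugepages is establishing in this run.
nr_hugepages=1536   # 512 on node0 + 1024 on node1, as configured above
surp=$(get_meminfo_field HugePages_Surp)    # 0 here
resv=$(get_meminfo_field HugePages_Rsvd)    # 0 here
total=$(get_meminfo_field HugePages_Total)  # 1536 here
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2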
00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.370 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.370 14:44:53 -- setup/common.sh@33 -- # echo 0 00:04:08.370 14:44:53 -- setup/common.sh@33 -- # return 0 00:04:08.370 14:44:53 -- setup/hugepages.sh@99 -- # surp=0 00:04:08.370 14:44:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.370 14:44:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.370 14:44:53 -- setup/common.sh@18 -- # local node= 00:04:08.370 14:44:53 -- setup/common.sh@19 -- # local var val 00:04:08.370 14:44:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.370 14:44:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.370 14:44:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.370 14:44:53 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.370 14:44:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.370 14:44:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.370 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 25313208 kB' 'MemAvailable: 29063080 kB' 'Buffers: 2696 kB' 'Cached: 13123308 kB' 'SwapCached: 0 kB' 'Active: 10119860 kB' 'Inactive: 3494336 kB' 'Active(anon): 9554076 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491468 kB' 'Mapped: 212492 kB' 'Shmem: 9065884 kB' 'KReclaimable: 203584 kB' 'Slab: 553656 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350072 kB' 'KernelStack: 12480 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 10647108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195780 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB' 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # 
continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.371 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.371 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.372 14:44:53 -- setup/common.sh@33 -- # echo 0 00:04:08.372 14:44:53 -- setup/common.sh@33 -- # return 0 00:04:08.372 14:44:53 -- setup/hugepages.sh@100 -- # resv=0 00:04:08.372 14:44:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:08.372 nr_hugepages=1536 00:04:08.372 14:44:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.372 resv_hugepages=0 00:04:08.372 14:44:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.372 surplus_hugepages=0 00:04:08.372 14:44:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.372 anon_hugepages=0 00:04:08.372 14:44:53 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:08.372 14:44:53 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:08.372 14:44:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.372 14:44:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.372 14:44:53 -- setup/common.sh@18 -- # local node= 00:04:08.372 14:44:53 -- setup/common.sh@19 -- # local var val 00:04:08.372 14:44:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.372 14:44:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.372 14:44:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.372 14:44:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.372 14:44:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.372 14:44:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 25313284 kB' 'MemAvailable: 29063156 kB' 'Buffers: 2696 kB' 'Cached: 13123336 kB' 'SwapCached: 0 kB' 'Active: 10120124 kB' 'Inactive: 3494336 kB' 'Active(anon): 9554340 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491704 kB' 'Mapped: 212492 kB' 'Shmem: 9065912 kB' 'KReclaimable: 203584 kB' 'Slab: 553672 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350088 kB' 'KernelStack: 12464 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 10647120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195780 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB' 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.372 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.372 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.373 14:44:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.373 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.373 14:44:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.373 14:44:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.373 14:44:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.373 14:44:53 -- setup/common.sh@32 -- # continue 00:04:08.373 14:44:53 -- 
[xtrace condensed: setup/common.sh@31-32 -- IFS=': ' / read -r var val _ / continue, repeated for every remaining /proc/meminfo field until the requested key matched]
00:04:08.373 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.373 14:44:53 -- setup/common.sh@33 -- # echo 1536
00:04:08.373 14:44:53 -- setup/common.sh@33 -- # return 0
00:04:08.373 14:44:53 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:08.373 14:44:53 -- setup/hugepages.sh@112 -- # get_nodes
00:04:08.373 14:44:53 -- setup/hugepages.sh@27 -- # local node
00:04:08.373 14:44:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.373 14:44:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:08.373 14:44:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.373 14:44:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:08.373 14:44:53 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:08.373 14:44:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:08.373 14:44:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:08.373 14:44:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:08.373 14:44:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:08.373 14:44:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.373 14:44:53 -- setup/common.sh@18 -- # local node=0
00:04:08.373 14:44:53 -- setup/common.sh@19 -- # local var val
00:04:08.373 14:44:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.373 14:44:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.373 14:44:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:08.373 14:44:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:08.373 14:44:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.374 14:44:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.374 14:44:53 -- setup/common.sh@31 -- # IFS=': '
00:04:08.374 14:44:53 -- setup/common.sh@31 -- # read -r var val _
00:04:08.374 14:44:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 19670080 kB' 'MemUsed: 4949332 kB' 'SwapCached: 0 kB' 'Active: 2904188 kB' 'Inactive: 148128 kB' 'Active(anon): 2702588 kB' 'Inactive(anon): 0 kB' 'Active(file): 201600 kB' 'Inactive(file): 148128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2840800 kB' 'Mapped: 42780 kB' 'AnonPages: 214684 kB' 'Shmem: 2491072 kB' 'KernelStack: 6536 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85608 kB' 'Slab: 248600 kB' 'SReclaimable: 85608 kB' 'SUnreclaim: 162992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
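The scans condensed above and below all follow one pattern: read a meminfo file line by line, split each line on ': ', and return the value of the single requested field. A minimal standalone sketch of that lookup, assuming bash with extglob (get_meminfo_sketch is an illustrative name, not the verbatim SPDK helper):

#!/usr/bin/env bash
shopt -s extglob
# Sketch of the lookup pattern traced here: print the value of one
# meminfo field, preferring the per-NUMA-node file when a node is given.
get_meminfo_sketch() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <id> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated "continue" entries
        echo "$val"
        return 0
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total 0  ->  512 on this box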
[xtrace condensed: per-field scan of node0 meminfo until HugePages_Surp matched]
00:04:08.374 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.374 14:44:53 -- setup/common.sh@33 -- # echo 0
00:04:08.374 14:44:53 -- setup/common.sh@33 -- # return 0
00:04:08.374 14:44:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.374 14:44:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:08.374 14:44:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:08.374 14:44:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:08.374 14:44:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.374 14:44:53 -- setup/common.sh@18 -- # local node=1
00:04:08.374 14:44:53 -- setup/common.sh@19 -- # local var val
00:04:08.374 14:44:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.374 14:44:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.374 14:44:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:08.374 14:44:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:08.374 14:44:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.374 14:44:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.375 14:44:53 -- setup/common.sh@31 -- # IFS=': '
00:04:08.375 14:44:53 -- setup/common.sh@31 -- # read -r var val _
00:04:08.375 14:44:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 5643204 kB' 'MemUsed: 13764040 kB' 'SwapCached: 0 kB' 'Active: 7216220 kB' 'Inactive: 3346208 kB' 'Active(anon): 6852036 kB' 'Inactive(anon): 0 kB' 'Active(file): 364184 kB' 'Inactive(file): 3346208 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10285260 kB' 'Mapped: 169712 kB' 'AnonPages: 277276 kB' 'Shmem: 6574868 kB' 'KernelStack: 5912 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117976 kB' 'Slab: 305072 kB' 'SReclaimable: 117976 kB' 'SUnreclaim: 187096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
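Around this point the trace walks both NUMA nodes and folds each node's HugePages_Surp into its expected count. A compact sketch of that accounting loop (the 512/1024 split is the one this run configured; the awk lookup stands in for the traced helper, so treat it as illustrative):

#!/usr/bin/env bash
# Sketch: add reserved and surplus pages to each node's expected count,
# as the hugepages.sh@115-117 entries do for node0 and node1.
declare -A nodes_test=([0]=512 [1]=1024)   # split configured by this run
resv=0                                      # global HugePages_Rsvd this run
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    f=/sys/devices/system/node/node${node}/meminfo
    surp=0
    [[ -r $f ]] && surp=$(awk '/HugePages_Surp/ {print $NF}' "$f")
    (( nodes_test[node] += ${surp:-0} ))
done
printf 'node0=%s node1=%s\n' "${nodes_test[0]}" "${nodes_test[1]}"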
[xtrace condensed: per-field scan of node1 meminfo until HugePages_Surp matched]
00:04:08.376 14:44:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.376 14:44:53 -- setup/common.sh@33 -- # echo 0
00:04:08.376 14:44:53 -- setup/common.sh@33 -- # return 0
00:04:08.376 14:44:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.376 14:44:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.376 14:44:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.376 14:44:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.376 14:44:53 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:08.376 14:44:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.376 14:44:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.376 14:44:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.376 14:44:53 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:04:08.376 14:44:53 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:08.376 real 0m1.459s
00:04:08.376 user 0m0.619s
00:04:08.376 sys 0m0.816s
00:04:08.376 14:44:53 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:08.376 14:44:53 -- common/autotest_common.sh@10 -- # set +x
00:04:08.376 ************************************
00:04:08.376 END TEST custom_alloc
00:04:08.376 ************************************
00:04:08.376 14:44:53 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:08.376 14:44:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:08.376 14:44:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:08.376 14:44:53 -- common/autotest_common.sh@10 -- # set +x
00:04:08.376 ************************************
00:04:08.376 START TEST no_shrink_alloc
00:04:08.376 ************************************
00:04:08.376 14:44:54 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:04:08.376 14:44:54 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:08.376 14:44:54 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:08.376 14:44:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:08.376 14:44:54 -- setup/hugepages.sh@51 -- # shift
00:04:08.376 14:44:54 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:08.376 14:44:54 -- setup/hugepages.sh@52 -- # local node_ids
00:04:08.376 14:44:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:08.376 14:44:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
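no_shrink_alloc starts by asking get_test_nr_hugepages for 2097152 kB on node 0; with the 2048 kB default hugepage size that works out to the nr_hugepages=1024 seen above. A sketch of that conversion (reading Hugepagesize from /proc/meminfo; a hedged stand-in, not the exact SPDK code):

#!/usr/bin/env bash
# Sketch: convert a requested size in kB into a hugepage count, the way
# get_test_nr_hugepages derives nr_hugepages=1024 from size=2097152.
size_kb=2097152
default_hugepages=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)  # kB
if (( size_kb >= default_hugepages )); then
    nr_hugepages=$(( size_kb / default_hugepages ))   # 2097152/2048 = 1024
    echo "nr_hugepages=$nr_hugepages"
fi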
00:04:08.376 14:44:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:08.376 14:44:54 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:08.376 14:44:54 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:08.376 14:44:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:08.376 14:44:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:08.376 14:44:54 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:08.376 14:44:54 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:08.376 14:44:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:08.376 14:44:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:08.376 14:44:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:08.376 14:44:54 -- setup/hugepages.sh@73 -- # return 0
00:04:08.376 14:44:54 -- setup/hugepages.sh@198 -- # setup output
00:04:08.376 14:44:54 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.376 14:44:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:09.754 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:09.754 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:09.754 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:09.754 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:09.754 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:09.754 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:09.754 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:09.754 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:09.754 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:09.754 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:09.754 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:09.754 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:09.754 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:09.754 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:09.754 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:09.754 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:09.754 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:09.754 14:44:55 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:09.754 14:44:55 -- setup/hugepages.sh@89 -- # local node
00:04:09.754 14:44:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:09.754 14:44:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:09.754 14:44:55 -- setup/hugepages.sh@92 -- # local surp
00:04:09.754 14:44:55 -- setup/hugepages.sh@93 -- # local resv
00:04:09.754 14:44:55 -- setup/hugepages.sh@94 -- # local anon
00:04:09.754 14:44:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
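The hugepages.sh@96 entry just above compares the kernel's transparent hugepage setting ('always [madvise] never', active mode bracketed) against '[never]'; only when THP is not disabled does verify_nr_hugepages go on to sample AnonHugePages. A hedged sketch of that gate:

#!/usr/bin/env bash
# Sketch: skip the AnonHugePages sample when THP is globally disabled,
# mirroring the [[ ... != *\[\n\e\v\e\r\]* ]] test in the trace.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages/ {print $2}' /proc/meminfo)   # kB
    echo "AnonHugePages=${anon} kB"
fi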
00:04:09.754 14:44:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:09.754 14:44:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:09.754 14:44:55 -- setup/common.sh@18 -- # local node=
00:04:09.754 14:44:55 -- setup/common.sh@19 -- # local var val
00:04:09.754 14:44:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.754 14:44:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.754 14:44:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.754 14:44:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.754 14:44:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.754 14:44:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.754 14:44:55 -- setup/common.sh@31 -- # IFS=': '
00:04:09.754 14:44:55 -- setup/common.sh@31 -- # read -r var val _
00:04:09.754 14:44:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26358796 kB' 'MemAvailable: 30108668 kB' 'Buffers: 2696 kB' 'Cached: 13123408 kB' 'SwapCached: 0 kB' 'Active: 10120816 kB' 'Inactive: 3494336 kB' 'Active(anon): 9555032 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492276 kB' 'Mapped: 212568 kB' 'Shmem: 9065984 kB' 'KReclaimable: 203584 kB' 'Slab: 553576 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 349992 kB' 'KernelStack: 12496 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10647140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195860 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: per-field scan of /proc/meminfo until AnonHugePages matched]
00:04:09.755 14:44:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.755 14:44:55 -- setup/common.sh@33 -- # echo 0
00:04:09.755 14:44:55 -- setup/common.sh@33 -- # return 0
00:04:09.755 14:44:55 -- setup/hugepages.sh@97 -- # anon=0
00:04:09.755 14:44:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:09.755 14:44:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.755 14:44:55 -- setup/common.sh@18 -- # local node=
00:04:09.755 14:44:55 -- setup/common.sh@19 -- # local var val
00:04:09.755 14:44:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.755 14:44:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.755 14:44:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.755 14:44:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.755 14:44:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.755 14:44:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.755 14:44:55 -- setup/common.sh@31 -- # IFS=': '
00:04:09.755 14:44:55 -- setup/common.sh@31 -- # read -r var val _
00:04:09.756 14:44:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26358936 kB' 'MemAvailable: 30108808 kB' 'Buffers: 2696 kB' 'Cached: 13123408 kB' 'SwapCached: 0 kB' 'Active: 10121008 kB' 'Inactive: 3494336 kB' 'Active(anon): 9555224 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492480 kB' 'Mapped: 212568 kB' 'Shmem: 9065984 kB' 'KReclaimable: 203584 kB' 'Slab: 553564 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 349980 kB' 'KernelStack: 12464 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10647152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195844 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: per-field scan of /proc/meminfo until HugePages_Surp matched]
00:04:09.757 14:44:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.757 14:44:55 -- setup/common.sh@32 -- # continue
00:04:09.757 14:44:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.757 14:44:55 -- setup/common.sh@32 -- # continue
00:04:09.757 14:44:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.757 14:44:55 -- setup/common.sh@32 -- # continue
00:04:09.757 14:44:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.757 14:44:55 -- setup/common.sh@33 -- # echo 0
00:04:09.757 14:44:55 -- setup/common.sh@33 -- # return 0
00:04:09.757 14:44:55 -- setup/hugepages.sh@99 -- # surp=0
00:04:09.757 14:44:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.757 14:44:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:09.757 14:44:55 -- setup/common.sh@18 -- # local node=
00:04:09.757 14:44:55 -- setup/common.sh@19 -- # local var val
00:04:09.757 14:44:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.757 14:44:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.757 14:44:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.757 14:44:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.757 14:44:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.757 14:44:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.757 14:44:55 -- setup/common.sh@31 -- # IFS=': '
00:04:09.757 14:44:55 -- setup/common.sh@31 -- # read -r var val _
00:04:09.757 14:44:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26358684 kB' 'MemAvailable: 30108556 kB' 'Buffers: 2696 kB' 'Cached: 13123408 kB' 'SwapCached: 0 kB' 'Active: 10120604 kB' 'Inactive: 3494336 kB' 'Active(anon): 9554820 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492068 kB' 'Mapped: 212528 kB' 'Shmem: 9065984 kB' 'KReclaimable: 203584 kB' 'Slab: 553564 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 349980 kB' 'KernelStack: 12528 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10647168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195844 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: the same per-field scan now runs for \H\u\g\e\P\a\g\e\s\_\R\s\v\d, continuing past every field from MemTotal through HugePages_Free]
00:04:09.758 14:44:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.758 14:44:55 -- setup/common.sh@33 -- # echo 0
00:04:09.758 14:44:55 -- setup/common.sh@33 -- # return 0
00:04:09.758 14:44:55 -- setup/hugepages.sh@100 -- # resv=0
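For orientation, here is a compact reconstruction of the get_meminfo helper that the setup/common.sh@17-33 trace lines above are stepping through. It is assembled from the xtrace alone, so treat it as a sketch: the real scripts/setup/common.sh may order or quote things differently.

    # Print the value of one meminfo field, optionally from a per-NUMA-node file.
    shopt -s extglob
    get_meminfo() {
        local get=$1
        local node=${2:-}        # empty => system-wide /proc/meminfo
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip it (extglob pattern).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

In this run, get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd both print 0, which is what the surp=0 and resv=0 assignments above record.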
00:04:09.758 14:44:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:09.758 nr_hugepages=1024
00:04:09.758 14:44:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.758 resv_hugepages=0
00:04:09.758 14:44:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.758 surplus_hugepages=0
00:04:09.758 14:44:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.758 anon_hugepages=0
00:04:09.758 14:44:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.758 14:44:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:09.758 14:44:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: get_meminfo prologue as above: get=HugePages_Total, node empty, mem_f=/proc/meminfo, mapfile -t mem, 'Node N ' prefixes stripped]
00:04:09.759 14:44:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26363732 kB' 'MemAvailable: 30113604 kB' 'Buffers: 2696 kB' 'Cached: 13123436 kB' 'SwapCached: 0 kB' 'Active: 10121048 kB' 'Inactive: 3494336 kB' 'Active(anon): 9555264 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492424 kB' 'Mapped: 212528 kB' 'Shmem: 9066012 kB' 'KReclaimable: 203584 kB' 'Slab: 553580 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 349996 kB' 'KernelStack: 12592 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10649408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195956 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
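The hugepages.sh@107/@109 arithmetic above is the heart of the verification: the kernel's HugePages_Total must equal the count the test asked for plus any surplus and reserved pages. As a standalone check (variable names follow the trace; how each is filled in is shown above):

    nr_hugepages=1024                      # requested by the test
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent"
    else
        echo "unexpected HugePages_Total: $total" >&2
    fi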
[xtrace condensed: the per-field scan runs again for \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, continuing past every field from MemTotal through Unaccepted]
00:04:10.021 14:44:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.021 14:44:55 -- setup/common.sh@33 -- # echo 1024
00:04:10.021 14:44:55 -- setup/common.sh@33 -- # return 0
00:04:10.021 14:44:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:10.021 14:44:55 -- setup/hugepages.sh@112 -- # get_nodes
00:04:10.021 14:44:55 -- setup/hugepages.sh@27 -- # local node
00:04:10.021 14:44:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.021 14:44:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:10.021 14:44:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.021 14:44:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:10.021 14:44:55 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:10.021 14:44:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:10.021 14:44:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.021 14:44:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
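get_nodes (hugepages.sh@27-33 above) enumerates the NUMA nodes and records how many hugepages each one holds; on this box it finds nodes_sys[0]=1024 and nodes_sys[1]=0. A sketch matching the trace; the trace only shows the finished assignments, so the sysfs file the counts are read from is an assumption:

    shopt -s extglob nullglob
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # "${node##*node}" reduces ".../node0" to "0"
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")  # assumed source
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail if no NUMA nodes were found
    }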
00:04:10.021 14:44:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace condensed: get_meminfo prologue: get=HugePages_Surp, node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo; mapfile -t mem; 'Node 0 ' prefixes stripped]
00:04:10.021 14:44:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 18611488 kB' 'MemUsed: 6007924 kB' 'SwapCached: 0 kB' 'Active: 2906636 kB' 'Inactive: 148128 kB' 'Active(anon): 2705036 kB' 'Inactive(anon): 0 kB' 'Active(file): 201600 kB' 'Inactive(file): 148128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2840804 kB' 'Mapped: 42780 kB' 'AnonPages: 217072 kB' 'Shmem: 2491076 kB' 'KernelStack: 6824 kB' 'PageTables: 5148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85608 kB' 'Slab: 248540 kB' 'SReclaimable: 85608 kB' 'SUnreclaim: 162932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
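Note that the node-local snapshot above is shorter than the system-wide one: per-node meminfo exposes HugePages_Total/Free/Surp but no HugePages_Rsvd or Hugepagesize line, which is why the per-node pass only asks for HugePages_Surp. A manual spot-check of the same file (output reconstructed from the snapshot above, not captured from the host):

    $ grep HugePages /sys/devices/system/node/node0/meminfo
    Node 0 HugePages_Total:  1024
    Node 0 HugePages_Free:   1024
    Node 0 HugePages_Surp:      0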
[xtrace condensed: the node0 fields are scanned for \H\u\g\e\P\a\g\e\s\_\S\u\r\p; everything from MemTotal through HugePages_Free hits continue]
00:04:10.022 14:44:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.022 14:44:55 -- setup/common.sh@33 -- # echo 0
00:04:10.022 14:44:55 -- setup/common.sh@33 -- # return 0
00:04:10.022 14:44:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.022 14:44:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.022 14:44:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.022 14:44:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.022 14:44:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:10.022 node0=1024 expecting 1024
00:04:10.022 14:44:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:10.022 14:44:55 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:10.022 14:44:55 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:10.022 14:44:55 -- setup/hugepages.sh@202 -- # setup output
00:04:10.022 14:44:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.022 14:44:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:10.956 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:10.956 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:10.956 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:10.956 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:10.956 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:10.956 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:10.956 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:10.956 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:10.956 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:10.956 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:10.956 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:10.956 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:10.956 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:10.956 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:10.956 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:10.956 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:10.956 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:11.218 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:11.218 14:44:56 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:11.218 14:44:56 -- setup/hugepages.sh@89 -- # local node
00:04:11.218 14:44:56 -- setup/hugepages.sh@90 -- # local sorted_t
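The CLEAR_HUGE=no / NRHUGE=512 pair above configures SPDK's scripts/setup.sh: with CLEAR_HUGE=no the script leaves the existing reservation alone, so asking for 512 pages while 1024 are already pinned on node0 is a no-op, which is exactly what the INFO line reports. The equivalent manual invocation would look like this (environment-variable interface as exercised here; sudo and the CLEAR_HUGE=yes behavior are assumptions):

    # Keep the current hugepage pool; only top up if fewer than NRHUGE exist.
    sudo CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh
    # CLEAR_HUGE=yes would instead free the existing 1024 pages first and
    # then reserve exactly 512.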
00:04:11.218 14:44:56 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:11.218 14:44:56 -- setup/hugepages.sh@92 -- # local surp
00:04:11.218 14:44:56 -- setup/hugepages.sh@93 -- # local resv
00:04:11.218 14:44:56 -- setup/hugepages.sh@94 -- # local anon
00:04:11.218 14:44:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:11.218 14:44:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace condensed: get_meminfo prologue: get=AnonHugePages, node empty, mem_f=/proc/meminfo, mapfile -t mem, 'Node N ' prefixes stripped]
00:04:11.218 14:44:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26348544 kB' 'MemAvailable: 30098416 kB' 'Buffers: 2696 kB' 'Cached: 13123488 kB' 'SwapCached: 0 kB' 'Active: 10121752 kB' 'Inactive: 3494336 kB' 'Active(anon): 9555968 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493152 kB' 'Mapped: 212648 kB' 'Shmem: 9066064 kB' 'KReclaimable: 203584 kB' 'Slab: 553968 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350384 kB' 'KernelStack: 12528 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10647388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195940 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
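The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is a transparent-hugepage gate: the left-hand side is the content of the kernel's THP "enabled" control file, and AnonHugePages is only worth sampling when THP is not locked to [never]. A sketch of the same gate; the sysfs path is the standard location, though the trace does not show it being read:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. 'always [madvise] never'
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    else
        anon=0
    fi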
[xtrace condensed: the per-field scan runs for \A\n\o\n\H\u\g\e\P\a\g\e\s, continuing past every field from MemTotal through SecPageTables]
SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.219 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.219 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.220 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.220 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.220 14:44:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.220 14:44:56 -- setup/common.sh@33 -- # echo 0 00:04:11.220 14:44:56 -- setup/common.sh@33 -- # return 0 00:04:11.220 14:44:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:11.220 14:44:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.220 14:44:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.220 14:44:56 -- setup/common.sh@18 -- # local node= 00:04:11.220 14:44:56 -- 
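The scan traced above is the whole of the lookup: read the meminfo file one line at a time, skip every key that is not the one requested, and echo the matching value. A minimal standalone sketch of that loop (meminfo_get is a hypothetical name; the traced function is get_meminfo in setup/common.sh):

    #!/usr/bin/env bash
    # Minimal sketch of the lookup traced above. Each non-matching key shows
    # up in the xtrace as one compare/continue pair; only the requested key
    # falls through to the echo.
    meminfo_get() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"    # the "kB" unit, when present, lands in the throwaway field
            return 0
        done < /proc/meminfo
        return 1
    }
    meminfo_get AnonHugePages    # prints 0 on this runner, per the trace above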
00:04:11.220 14:44:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.220 14:44:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.220 14:44:56 -- setup/common.sh@18 -- # local node=
00:04:11.220 14:44:56 -- setup/common.sh@19 -- # local var val
00:04:11.220 14:44:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.220 14:44:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.220 14:44:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.220 14:44:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.220 14:44:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.220 14:44:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.220 14:44:56 -- setup/common.sh@31 -- # IFS=': '
00:04:11.220 14:44:56 -- setup/common.sh@31 -- # read -r var val _
00:04:11.220 14:44:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26349440 kB' 'MemAvailable: 30099312 kB' 'Buffers: 2696 kB' 'Cached: 13123492 kB' 'SwapCached: 0 kB' 'Active: 10121608 kB' 'Inactive: 3494336 kB' 'Active(anon): 9555824 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493060 kB' 'Mapped: 212612 kB' 'Shmem: 9066068 kB' 'KReclaimable: 203584 kB' 'Slab: 553960 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350376 kB' 'KernelStack: 12496 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10647400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195924 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:11.220 14:44:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.220 14:44:56 -- setup/common.sh@32 -- # continue
[... compare/continue trace repeated for each meminfo key until HugePages_Surp matches ...]
00:04:11.221 14:44:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.221 14:44:56 -- setup/common.sh@33 -- # echo 0
00:04:11.221 14:44:56 -- setup/common.sh@33 -- # return 0
00:04:11.221 14:44:56 -- setup/hugepages.sh@99 -- # surp=0
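One detail worth noting from the call above: common.sh@29 strips a leading "Node N " from every line before parsing, so the same loop handles both /proc/meminfo and the per-node meminfo files that carry that prefix. A standalone sketch of that expansion (the sample lines mimic the per-node format, with values taken from the node0 snapshot later in this log):

    #!/usr/bin/env bash
    # Sketch of the "Node N " prefix strip at common.sh@29; the +([0-9])
    # pattern requires extglob.
    shopt -s extglob
    mem=('Node 0 MemTotal: 24619412 kB' 'Node 0 MemFree: 18616292 kB')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # -> MemTotal: 24619412 kB
    #    MemFree: 18616292 kB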
00:04:11.221 14:44:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:11.221 14:44:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.221 14:44:56 -- setup/common.sh@18 -- # local node=
00:04:11.221 14:44:56 -- setup/common.sh@19 -- # local var val
00:04:11.221 14:44:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.221 14:44:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.221 14:44:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.221 14:44:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.221 14:44:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.221 14:44:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.221 14:44:56 -- setup/common.sh@31 -- # IFS=': '
00:04:11.221 14:44:56 -- setup/common.sh@31 -- # read -r var val _
00:04:11.221 14:44:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26349328 kB' 'MemAvailable: 30099200 kB' 'Buffers: 2696 kB' 'Cached: 13123504 kB' 'SwapCached: 0 kB' 'Active: 10121292 kB' 'Inactive: 3494336 kB' 'Active(anon): 9555508 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492704 kB' 'Mapped: 212532 kB' 'Shmem: 9066080 kB' 'KReclaimable: 203584 kB' 'Slab: 553940 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350356 kB' 'KernelStack: 12544 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10647416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195924 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:11.221 14:44:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.221 14:44:56 -- setup/common.sh@32 -- # continue
[... compare/continue trace repeated for each meminfo key until HugePages_Rsvd matches ...]
00:04:11.223 14:44:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.223 14:44:56 -- setup/common.sh@33 -- # echo 0
00:04:11.223 14:44:56 -- setup/common.sh@33 -- # return 0
00:04:11.223 14:44:56 -- setup/hugepages.sh@100 -- # resv=0
00:04:11.223 14:44:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:11.223 nr_hugepages=1024
00:04:11.223 14:44:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:11.223 resv_hugepages=0
00:04:11.223 14:44:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:11.223 surplus_hugepages=0
00:04:11.223 14:44:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:11.223 anon_hugepages=0
00:04:11.223 14:44:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.223 14:44:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
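With anon, surp and resv all read back as 0, the checks at hugepages.sh@107-@109 reduce to asserting that the kernel's hugepage pool equals what the test requested. A sketch of the same invariant checked directly (awk stands in for the traced get_meminfo; nr_hugepages=1024 is the value this test run configured):

    #!/usr/bin/env bash
    # Sketch of the pool-accounting check the trace performs: the pool the
    # kernel reports must equal requested pages plus surplus and reserved.
    nr_hugepages=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage accounting mismatch" >&2
    fi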
00:04:11.223 14:44:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:11.223 14:44:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:11.223 14:44:56 -- setup/common.sh@18 -- # local node=
00:04:11.223 14:44:56 -- setup/common.sh@19 -- # local var val
00:04:11.223 14:44:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.223 14:44:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.223 14:44:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.223 14:44:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.223 14:44:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.223 14:44:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.223 14:44:56 -- setup/common.sh@31 -- # IFS=': '
00:04:11.223 14:44:56 -- setup/common.sh@31 -- # read -r var val _
00:04:11.223 14:44:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 26355672 kB' 'MemAvailable: 30105544 kB' 'Buffers: 2696 kB' 'Cached: 13123516 kB' 'SwapCached: 0 kB' 'Active: 10121312 kB' 'Inactive: 3494336 kB' 'Active(anon): 9555528 kB' 'Inactive(anon): 0 kB' 'Active(file): 565784 kB' 'Inactive(file): 3494336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492700 kB' 'Mapped: 212532 kB' 'Shmem: 9066092 kB' 'KReclaimable: 203584 kB' 'Slab: 553940 kB' 'SReclaimable: 203584 kB' 'SUnreclaim: 350356 kB' 'KernelStack: 12544 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 10647428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195924 kB' 'VmallocChunk: 0 kB' 'Percpu: 32832 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1613404 kB' 'DirectMap2M: 15083520 kB' 'DirectMap1G: 35651584 kB'
00:04:11.223 14:44:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.223 14:44:56 -- setup/common.sh@32 -- # continue
[... compare/continue trace repeated for each meminfo key until HugePages_Total matches ...]
00:04:11.224 14:44:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.224 14:44:56 -- setup/common.sh@33 -- # echo 1024
00:04:11.224 14:44:56 -- setup/common.sh@33 -- # return 0
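The get_nodes pass that follows walks /sys/devices/system/node/node<N> with an extglob pattern and records a hugepage count per NUMA node; on this runner it finds two nodes, with the whole 1024-page pool on node 0. A sketch of that style of enumeration (the 2048kB pool file is the standard kernel sysfs layout; that the real get_nodes reads exactly this file is an assumption, since its source is not shown in the trace):

    #!/usr/bin/env bash
    # Sketch of a get_nodes-style enumeration: one array slot per NUMA node,
    # holding that node's 2 MB hugepage count.
    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    # The trace below records nodes_sys[0]=1024 and nodes_sys[1]=0.
    for i in "${!nodes_sys[@]}"; do echo "node$i=${nodes_sys[i]}"; done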
+ surp + resv )) 00:04:11.224 14:44:56 -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.224 14:44:56 -- setup/hugepages.sh@27 -- # local node 00:04:11.224 14:44:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.224 14:44:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.224 14:44:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.224 14:44:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:11.224 14:44:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:11.224 14:44:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.224 14:44:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.224 14:44:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.224 14:44:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.225 14:44:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.225 14:44:56 -- setup/common.sh@18 -- # local node=0 00:04:11.225 14:44:56 -- setup/common.sh@19 -- # local var val 00:04:11.225 14:44:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.225 14:44:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.225 14:44:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.225 14:44:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.225 14:44:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.225 14:44:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.225 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.225 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.225 14:44:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 18616292 kB' 'MemUsed: 6003120 kB' 'SwapCached: 0 kB' 'Active: 2905536 kB' 'Inactive: 148128 kB' 'Active(anon): 2703936 kB' 'Inactive(anon): 0 kB' 'Active(file): 201600 kB' 'Inactive(file): 148128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2840812 kB' 'Mapped: 42780 kB' 'AnonPages: 216004 kB' 'Shmem: 2491084 kB' 'KernelStack: 6584 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85608 kB' 'Slab: 248628 kB' 'SReclaimable: 85608 kB' 'SUnreclaim: 163020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.225 14:44:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.225 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.225 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.225 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.225 14:44:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.225 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.225 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.225 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.225 14:44:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.225 14:44:56 -- setup/common.sh@32 -- # continue 00:04:11.225 14:44:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.225 14:44:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.225 14:44:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.225 14:44:56 -- setup/common.sh@32 -- # continue 
00:04:11.225 14:44:56 -- setup/common.sh@31 -- # IFS=': '
00:04:11.225 14:44:56 -- setup/common.sh@31 -- # read -r var val _
00:04:11.225 14:44:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.225 14:44:56 -- setup/common.sh@32 -- # continue
[trace condensed: the same read/compare/continue cycle repeats for every remaining meminfo field -- Inactive, Active(anon), ..., Slab, AnonHugePages, ..., HugePages_Total, HugePages_Free -- until HugePages_Surp matches]
00:04:11.226 14:44:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.226 14:44:56 -- setup/common.sh@33 -- # echo 0
00:04:11.226 14:44:56 -- setup/common.sh@33 -- # return 0
00:04:11.226 14:44:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:11.226 14:44:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.226 14:44:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.226 14:44:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.226 14:44:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:11.226 node0=1024 expecting 1024
00:04:11.226 14:44:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:11.226 real 0m2.824s
00:04:11.226 user 0m1.163s
00:04:11.226 sys 0m1.608s
00:04:11.226 14:44:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:11.226 14:44:56 -- common/autotest_common.sh@10 -- # set +x
00:04:11.226 ************************************
00:04:11.226 END TEST no_shrink_alloc
00:04:11.226 ************************************
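The scan above is setup/common.sh's get_meminfo pattern: split each line of a meminfo source on ': ' and skip fields until the requested one matches. A minimal sketch, simplified to /proc/meminfo (the function body here is an illustration, not the script itself):

    get_meminfo() {
        # print the value of one field, e.g. HugePages_Surp; 0 if absent
        local want=$1
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        echo 0
    }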
00:04:11.226 14:44:56 -- setup/hugepages.sh@217 -- # clear_hp
00:04:11.226 14:44:56 -- setup/hugepages.sh@37 -- # local node hp
[trace condensed: clear_hp loops over both NUMA nodes and every hugepages-* sysfs directory, echoing 0 into each]
00:04:11.226 14:44:56 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:11.226 14:44:56 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:11.226 real 0m11.789s
00:04:11.226 user 0m4.509s
00:04:11.226 sys 0m6.038s
00:04:11.226 14:44:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:11.226 14:44:56 -- common/autotest_common.sh@10 -- # set +x
00:04:11.226 ************************************
00:04:11.226 END TEST hugepages
00:04:11.226 ************************************
00:04:11.226 14:44:56 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:11.226 14:44:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:11.226 14:44:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:11.485 ************************************
00:04:11.485 START TEST driver
00:04:11.485 ************************************
00:04:11.485 14:44:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:11.485 * Looking for test storage...
00:04:11.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:11.485 14:44:57 -- setup/driver.sh@68 -- # setup reset
00:04:11.485 14:44:57 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:11.485 14:44:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:14.017 14:44:59 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:14.017 ************************************
00:04:14.017 START TEST guess_driver
00:04:14.017 ************************************
00:04:14.017 14:44:59 -- common/autotest_common.sh@1111 -- # guess_driver
00:04:14.017 14:44:59 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:14.017 14:44:59 -- setup/driver.sh@47 -- # local fail=0
00:04:14.017 14:44:59 -- setup/driver.sh@49 -- # pick_driver
00:04:14.017 14:44:59 -- setup/driver.sh@36 -- # vfio
00:04:14.017 14:44:59 -- setup/driver.sh@21 -- # local iommu_groups
00:04:14.017 14:44:59 -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:14.017 14:44:59 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:14.017 14:44:59 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:14.017 14:44:59 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:14.017 14:44:59 -- setup/driver.sh@29 -- # (( 143 > 0 ))
00:04:14.017 14:44:59 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:14.017 14:44:59 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
[trace condensed: modprobe resolves the vfio_pci dependency chain -- irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core, vfio-pci -- and the *.ko match succeeds]
00:04:14.017 14:44:59 -- setup/driver.sh@30 -- # return 0
00:04:14.017 14:44:59 -- setup/driver.sh@37 -- # echo vfio-pci
00:04:14.017 14:44:59 -- setup/driver.sh@49 -- # driver=vfio-pci
00:04:14.017 14:44:59 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:14.017 14:44:59 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:14.017 Looking for driver=vfio-pci
00:04:14.017 14:44:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:14.017 14:44:59 -- setup/driver.sh@45 -- # setup output config
00:04:14.017 14:44:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:14.017 14:44:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
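guess_driver picks vfio-pci only when the kernel exposes IOMMU groups (143 on this node) and the module's dependency chain resolves. A hedged sketch of that decision; the real pick_driver in driver.sh also handles the unsafe no-IOMMU knob and other fallbacks:

    shopt -s nullglob
    pick_driver() {
        # vfio-pci is only usable when at least one IOMMU group exists
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci >/dev/null 2>&1; then
            echo vfio-pci
        else
            echo 'No valid driver found'   # the sentinel driver.sh@51 tests for
        fi
    }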
00:04:14.951 14:45:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:14.951 14:45:00 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:14.951 14:45:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[trace condensed: the same marker/driver check repeats for every device line emitted by setup.sh config; each device reports vfio-pci, so fail stays 0]
00:04:16.327 14:45:01 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:16.327 14:45:01 -- setup/driver.sh@65 -- # setup reset
00:04:16.327 14:45:01 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:16.327 14:45:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:18.851 real 0m4.620s
00:04:18.851 user 0m1.010s
00:04:18.851 sys 0m1.708s
00:04:18.851 14:45:04 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:18.851 14:45:04 -- common/autotest_common.sh@10 -- # set +x
00:04:18.851 ************************************
00:04:18.851 END TEST guess_driver
00:04:18.851 ************************************
00:04:18.851 real 0m7.098s
00:04:18.851 user 0m1.614s
00:04:18.851 sys 0m2.713s
00:04:18.851 ************************************
00:04:18.851 END TEST driver
00:04:18.851 ************************************
00:04:18.851 14:45:04 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:18.851 14:45:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:18.851 14:45:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:18.851 ************************************
00:04:18.851 START TEST devices
00:04:18.851 ************************************
00:04:18.851 14:45:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:18.851 * Looking for test storage...
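Every test in this log is wrapped by a run_test helper that prints the START/END banners and times the body under `time`, which is where the real/user/sys lines come from. A hedged sketch of the visible pattern (the real helper in autotest_common.sh also validates its argument count, as the '[' 2 -le 1 ']' trace shows):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }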
00:04:18.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:18.851 14:45:04 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:18.851 14:45:04 -- setup/devices.sh@192 -- # setup reset
00:04:18.851 14:45:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:20.223 14:45:05 -- setup/devices.sh@194 -- # get_zoned_devs
[trace condensed: get_zoned_devs checks /sys/block/nvme0n1/queue/zoned and finds no zoned namespaces]
00:04:20.223 14:45:05 -- setup/devices.sh@196 -- # blocks=()
00:04:20.223 14:45:05 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:20.223 14:45:05 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:20.223 14:45:05 -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:20.223 14:45:05 -- setup/devices.sh@202 -- # pci=0000:82:00.0
00:04:20.223 14:45:05 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:20.223 14:45:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:20.223 No valid GPT data, bailing
00:04:20.223 14:45:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:20.223 14:45:05 -- scripts/common.sh@391 -- # pt=
00:04:20.223 14:45:05 -- scripts/common.sh@392 -- # return 1
00:04:20.223 14:45:05 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:20.223 14:45:05 -- setup/common.sh@80 -- # echo 1000204886016
00:04:20.223 14:45:05 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:04:20.223 14:45:05 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:20.223 14:45:05 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0
00:04:20.223 14:45:05 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:20.223 14:45:05 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:20.223 ************************************
00:04:20.223 START TEST nvme_mount
00:04:20.223 ************************************
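The disk qualifies because it carries no GPT ("No valid GPT data, bailing" means it is not in use) and because 1000204886016 bytes clears the 3 GiB floor. The byte count is the 512-byte sector count from sysfs; a small illustrative sketch of that conversion (helper name mirrors the trace):

    sec_size_to_bytes() {
        local dev=$1
        # the kernel reports /sys/block/*/size in 512-byte sectors
        echo $(( $(cat "/sys/block/$dev/size") * 512 ))
    }
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    (( $(sec_size_to_bytes nvme0n1) >= min_disk_size )) && echo 'nvme0n1 qualifies'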
00:04:20.223 14:45:05 -- common/autotest_common.sh@1111 -- # nvme_mount
00:04:20.224 14:45:05 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:20.224 14:45:05 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:20.224 14:45:05 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:20.224 14:45:05 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:20.224 14:45:05 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
[trace condensed: partition_drive builds the parts list (nvme0n1p1), scales size=1073741824 down to 512-byte sectors, zaps the GPT with sgdisk, and launches sync_dev_uevents to wait for the partition uevent]
00:04:21.597 Creating new GPT entries in memory.
00:04:21.597 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:21.597 other utilities.
00:04:21.597 14:45:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:22.530 Creating new GPT entries in memory.
00:04:22.530 The operation has completed successfully.
00:04:22.530 14:45:07 -- setup/common.sh@62 -- # wait 3637467
00:04:22.530 14:45:07 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:22.530 14:45:07 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:22.530 14:45:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:22.530 14:45:07 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:22.530 14:45:07 -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:22.530 14:45:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:04:22.530 14:45:07 -- setup/devices.sh@47 -- # setup output config
00:04:22.530 14:45:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:23.464 14:45:08 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:23.464 14:45:08 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:23.464 14:45:08 -- setup/devices.sh@63 -- # found=1
[trace condensed: the sixteen I/OAT channels at 0000:00:04.0-7 and 0000:80:04.0-7 do not match 0000:82:00.0 and are skipped]
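The partition-and-mount sequence above reduces to a handful of commands; a condensed sketch using the same sector math (sectors 2048..2099199 are exactly 1 GiB of 512-byte sectors), with an illustrative mount path standing in for the workspace one:

    disk=/dev/nvme0n1
    mnt=/tmp/nvme_mount                                 # illustrative path
    sgdisk "$disk" --zap-all                            # destroy any old GPT
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # (2099199-2048+1)*512 B = 1 GiB
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"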
00:04:23.464 14:45:09 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:23.464 14:45:09 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:23.464 14:45:09 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:23.464 14:45:09 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:23.464 14:45:09 -- setup/devices.sh@110 -- # cleanup_nvme
00:04:23.464 14:45:09 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:23.464 14:45:09 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:23.464 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:23.464 14:45:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:23.723 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:23.723 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:23.723 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:23.723 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:23.723 14:45:09 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:23.723 14:45:09 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:23.723 14:45:09 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:23.980 14:45:09 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:23.980 14:45:09 -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:24.942 14:45:10 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:24.942 14:45:10 -- setup/devices.sh@63 -- # found=1
[trace condensed: the whole-disk mount is now the active device at 0000:82:00.0; the I/OAT channels are skipped as before]
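verify's status scan, traced above, is a plain read loop over the `setup output config` listing: the first field is the BDF, the trailing fields the status text. A hedged sketch of the found-flag logic (the status column format is inferred from the trace, and the loop body is simplified):

    target=0000:82:00.0
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$target" ]] || continue
        [[ $status == *'Active devices: '*'nvme0n1:nvme0n1'* ]] && found=1
    done < <(PCI_ALLOWED=$target ./scripts/setup.sh config)
    (( found == 1 )) && echo 'device is active, so setup.sh will not rebind it'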
00:04:24.943 14:45:10 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:24.943 14:45:10 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:24.943 14:45:10 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:24.943 14:45:10 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:24.943 14:45:10 -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' ''
00:04:26.316 14:45:11 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:26.316 14:45:11 -- setup/devices.sh@63 -- # found=1
[trace condensed: with no mount point left, the disk still shows as data@nvme0n1 at 0000:82:00.0; the I/OAT channels are skipped]
00:04:26.316 14:45:11 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:26.316 14:45:11 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:26.316 14:45:11 -- setup/devices.sh@68 -- # return 0
00:04:26.316 14:45:11 -- setup/devices.sh@128 -- # cleanup_nvme
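cleanup_nvme, invoked above, is the teardown pattern this suite repeats: unmount if mounted, then wipe the partition and disk signatures. A hedged sketch with a shortened mount path for illustration:

    cleanup_nvme() {
        local mnt=/tmp/nvme_mount          # the real path lives under the workspace
        mountpoint -q "$mnt" && umount "$mnt"
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1 ]]   && wipefs --all /dev/nvme0n1
    }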
00:04:26.316 14:45:11 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:26.316 14:45:11 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:26.316 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:26.316 real 0m6.093s
00:04:26.316 user 0m1.392s
00:04:26.316 sys 0m2.318s
00:04:26.316 14:45:11 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:26.316 14:45:11 -- common/autotest_common.sh@10 -- # set +x
00:04:26.316 ************************************
00:04:26.316 END TEST nvme_mount
00:04:26.316 ************************************
00:04:26.316 14:45:12 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:26.573 ************************************
00:04:26.573 START TEST dm_mount
00:04:26.573 ************************************
00:04:26.573 14:45:12 -- common/autotest_common.sh@1111 -- # dm_mount
00:04:26.573 14:45:12 -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:26.573 14:45:12 -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:26.573 14:45:12 -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:26.573 14:45:12 -- setup/devices.sh@148 -- # partition_drive nvme0n1
[trace condensed: partition_drive now builds two parts (nvme0n1p1, nvme0n1p2), zaps the GPT, and waits for both partition uevents]
00:04:27.509 Creating new GPT entries in memory.
00:04:27.509 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:27.509 other utilities.
00:04:27.509 14:45:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:28.514 Creating new GPT entries in memory.
00:04:28.514 The operation has completed successfully.
00:04:28.514 14:45:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:29.450 The operation has completed successfully.
00:04:29.450 14:45:15 -- setup/common.sh@62 -- # wait 3640387
00:04:29.450 14:45:15 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:29.450 14:45:15 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:29.450 14:45:15 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:29.450 14:45:15 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:29.450 14:45:15 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:29.450 14:45:15 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:29.450 14:45:15 -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:29.450 14:45:15 -- setup/devices.sh@166 -- # dm=dm-0
00:04:29.450 14:45:15 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:29.450 14:45:15 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:04:29.450 14:45:15 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:29.708 14:45:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:29.708 14:45:15 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
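dmsetup create reads a device-mapper table on stdin, which the log does not echo; the following is therefore a hypothetical linear concatenation of the two 1 GiB partitions, consistent with the dm-0 holder links checked at @168/@169:

    size1=$(blockdev --getsz /dev/nvme0n1p1)   # length in 512-byte sectors
    size2=$(blockdev --getsz /dev/nvme0n1p2)
    dmsetup create nvme_dm_test <<EOF
    0 $size1 linear /dev/nvme0n1p1 0
    $size1 $size2 linear /dev/nvme0n1p2 0
    EOF
    readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0 above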
00:04:29.708 14:45:15 -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:29.708 14:45:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:04:29.708 14:45:15 -- setup/devices.sh@47 -- # setup output config
00:04:29.708 14:45:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:30.642 14:45:16 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:30.642 14:45:16 -- setup/devices.sh@63 -- # found=1
[trace condensed: the I/OAT channels are skipped; found==1, so the dummy test file is removed and dm_mount is unmounted]
00:04:30.642 14:45:16 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:30.642 14:45:16 -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:04:32.016 14:45:17 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0* ]]
00:04:32.016 14:45:17 -- setup/devices.sh@63 -- # found=1
[trace condensed: after the unmount only the two dm holders remain active at 0000:82:00.0; the I/OAT channels are skipped]
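The holder links that both verifications lean on are how sysfs exposes device-mapper stacking: a partition's holders/ directory names whatever dm device sits on top of it. A short sketch of the same check:

    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
    for part in nvme0n1p1 nvme0n1p2; do
        [[ -e /sys/class/block/$part/holders/$dm ]] && echo "$part is held by $dm"
    done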
00:04:32.016 14:45:17 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:32.016 14:45:17 -- setup/devices.sh@68 -- # return 0
00:04:32.016 14:45:17 -- setup/devices.sh@187 -- # cleanup_dm
00:04:32.016 14:45:17 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:32.016 14:45:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:32.016 14:45:17 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:32.016 14:45:17 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:32.016 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:32.016 14:45:17 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:32.016 real 0m5.457s
00:04:32.016 user 0m0.940s
00:04:32.016 sys 0m1.411s
00:04:32.016 ************************************
00:04:32.016 END TEST dm_mount
00:04:32.016 ************************************
00:04:32.016 14:45:17 -- setup/devices.sh@1 -- # cleanup
00:04:32.016 14:45:17 -- setup/devices.sh@11 -- # cleanup_nvme
00:04:32.016 14:45:17 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:32.016 14:45:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:32.016 14:45:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:32.275 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:32.275 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:32.275 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:32.275 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:32.275 14:45:17 -- setup/devices.sh@12 -- # cleanup_dm
[trace condensed: cleanup_dm re-checks the dm mount, symlink, and partitions; all are already gone, then a final wipefs --all /dev/nvme0n1 runs]
00:04:32.275 real 0m13.596s
00:04:32.275 user 0m3.041s
00:04:32.275 sys 0m4.820s
00:04:32.275 ************************************
00:04:32.275 END TEST devices
00:04:32.275 ************************************
00:04:32.275 real 0m43.242s
00:04:32.275 user 0m12.505s
00:04:32.275 sys 0m19.105s
00:04:32.275 ************************************
00:04:32.275 END TEST setup.sh
00:04:32.275 ************************************
00:04:32.275 14:45:17 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:33.650 Hugepages
00:04:33.650 node     hugesize     free /  total
00:04:33.650 node0   1048576kB        0 /      0
00:04:33.650 node0      2048kB     2048 /   2048
00:04:33.650 node1   1048576kB        0 /      0
00:04:33.650 node1      2048kB        0 /      0
00:04:33.650
00:04:33.650 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:33.650 I/OAT    0000:00:04.0    8086   0e20   0       ioatdma          -          -
00:04:33.650 I/OAT    0000:00:04.1    8086   0e21   0       ioatdma          -          -
00:04:33.650 I/OAT    0000:00:04.2    8086   0e22   0       ioatdma          -          -
00:04:33.650 I/OAT    0000:00:04.3    8086   0e23   0       ioatdma          -          -
00:04:33.650 I/OAT    0000:00:04.4    8086   0e24   0       ioatdma          -          -
00:04:33.650 I/OAT    0000:00:04.5    8086   0e25   0       ioatdma          -          -
00:04:33.650 I/OAT    0000:00:04.6    8086   0e26   0       ioatdma          -          -
00:04:33.650 I/OAT    0000:00:04.7    8086   0e27   0       ioatdma          -          -
00:04:33.650 I/OAT    0000:80:04.0    8086   0e20   1       ioatdma          -          -
00:04:33.650 I/OAT    0000:80:04.1    8086   0e21   1       ioatdma          -          -
00:04:33.650 I/OAT    0000:80:04.2    8086   0e22   1       ioatdma          -          -
00:04:33.650 I/OAT    0000:80:04.3    8086   0e23   1       ioatdma          -          -
00:04:33.650 I/OAT    0000:80:04.4    8086   0e24   1       ioatdma          -          -
00:04:33.650 I/OAT    0000:80:04.5    8086   0e25   1       ioatdma          -          -
00:04:33.650 I/OAT    0000:80:04.6    8086   0e26   1       ioatdma          -          -
00:04:33.650 I/OAT    0000:80:04.7    8086   0e27   1       ioatdma          -          -
00:04:33.650 NVMe     0000:82:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
00:04:33.650 14:45:19 -- spdk/autotest.sh@130 -- # uname -s
00:04:33.650 14:45:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:33.650 14:45:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:33.650 14:45:19 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:35.023 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:35.023 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:35.023 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:35.023 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:35.023 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:35.023 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
ioatdma -> vfio-pci 00:04:35.023 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:35.023 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:35.023 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:35.023 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:35.023 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:35.023 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:35.023 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:35.023 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:35.023 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:35.023 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:35.958 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:35.958 14:45:21 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:36.892 14:45:22 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:36.892 14:45:22 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:36.892 14:45:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:36.892 14:45:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:36.893 14:45:22 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:36.893 14:45:22 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:36.893 14:45:22 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.893 14:45:22 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:36.893 14:45:22 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:36.893 14:45:22 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:36.893 14:45:22 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:04:36.893 14:45:22 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.267 Waiting for block devices as requested 00:04:38.267 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:04:38.267 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:38.267 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:38.267 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:38.267 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:38.525 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:38.525 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:38.525 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:38.525 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:38.525 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:38.782 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:38.782 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:38.782 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:39.039 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:39.039 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:39.039 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:39.039 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:39.299 14:45:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:39.299 14:45:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:04:39.299 14:45:24 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:39.299 14:45:24 -- common/autotest_common.sh@1488 -- # grep 0000:82:00.0/nvme/nvme 00:04:39.299 14:45:24 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:39.299 14:45:24 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:04:39.299 14:45:24 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:39.299 14:45:24 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:39.299 14:45:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:39.299 14:45:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:39.299 14:45:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:39.299 14:45:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:39.299 14:45:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:39.299 14:45:24 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:39.299 14:45:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:39.299 14:45:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:39.299 14:45:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:39.299 14:45:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:39.299 14:45:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.299 14:45:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:39.299 14:45:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:39.299 14:45:24 -- common/autotest_common.sh@1543 -- # continue 00:04:39.299 14:45:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:39.299 14:45:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:39.299 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:04:39.299 14:45:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:39.299 14:45:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:39.299 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:04:39.299 14:45:24 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.675 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:40.675 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:40.675 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:40.675 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:40.675 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:40.675 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:40.675 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:40.675 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:40.675 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:40.675 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:40.675 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:40.675 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:40.675 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:40.675 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:40.675 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:40.675 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:41.611 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:41.611 14:45:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:41.611 14:45:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:41.611 14:45:27 -- common/autotest_common.sh@10 -- # set +x 00:04:41.611 14:45:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:41.611 14:45:27 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:41.611 14:45:27 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.611 14:45:27 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:41.611 14:45:27 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:41.611 14:45:27 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:41.611 14:45:27 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:41.611 
14:45:27 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:41.611 14:45:27 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.611 14:45:27 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.611 14:45:27 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:41.611 14:45:27 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:41.611 14:45:27 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:04:41.611 14:45:27 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:41.611 14:45:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:04:41.611 14:45:27 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:41.611 14:45:27 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:41.611 14:45:27 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:41.611 14:45:27 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:82:00.0 00:04:41.611 14:45:27 -- common/autotest_common.sh@1578 -- # [[ -z 0000:82:00.0 ]] 00:04:41.611 14:45:27 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=3645595 00:04:41.611 14:45:27 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.611 14:45:27 -- common/autotest_common.sh@1584 -- # waitforlisten 3645595 00:04:41.611 14:45:27 -- common/autotest_common.sh@817 -- # '[' -z 3645595 ']' 00:04:41.611 14:45:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.611 14:45:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:41.611 14:45:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.611 14:45:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:41.611 14:45:27 -- common/autotest_common.sh@10 -- # set +x 00:04:41.611 [2024-04-26 14:45:27.319672] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:04:41.611 [2024-04-26 14:45:27.319788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645595 ] 00:04:41.611 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.870 [2024-04-26 14:45:27.354345] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
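The get_nvme_bdfs helper traced above reduces to a short pipeline: gen_nvme.sh emits a JSON config of bdev_nvme_attach_controller entries and jq extracts each controller's PCI address. A minimal sketch, assuming $rootdir points at the SPDK checkout as in the trace:

    get_nvme_bdfs() {
        local bdfs
        # gen_nvme.sh prints attach_controller params; traddr is the BDF
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1
        printf '%s\n' "${bdfs[@]}"
    }

On this node the pipeline yields a single device, 0000:82:00.0, which is why the (( 1 == 0 )) guard above falls through to the printf.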
00:04:41.870 [2024-04-26 14:45:27.381126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.870 [2024-04-26 14:45:27.468566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.128 14:45:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:42.128 14:45:27 -- common/autotest_common.sh@850 -- # return 0 00:04:42.128 14:45:27 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:04:42.128 14:45:27 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:42.128 14:45:27 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:04:45.413 nvme0n1 00:04:45.413 14:45:30 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:45.413 [2024-04-26 14:45:31.023127] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:45.413 [2024-04-26 14:45:31.023175] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:45.413 request: 00:04:45.413 { 00:04:45.413 "nvme_ctrlr_name": "nvme0", 00:04:45.413 "password": "test", 00:04:45.413 "method": "bdev_nvme_opal_revert", 00:04:45.413 "req_id": 1 00:04:45.413 } 00:04:45.413 Got JSON-RPC error response 00:04:45.413 response: 00:04:45.413 { 00:04:45.413 "code": -32603, 00:04:45.413 "message": "Internal error" 00:04:45.413 } 00:04:45.413 14:45:31 -- common/autotest_common.sh@1590 -- # true 00:04:45.413 14:45:31 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:45.413 14:45:31 -- common/autotest_common.sh@1594 -- # killprocess 3645595 00:04:45.413 14:45:31 -- common/autotest_common.sh@936 -- # '[' -z 3645595 ']' 00:04:45.413 14:45:31 -- common/autotest_common.sh@940 -- # kill -0 3645595 00:04:45.413 14:45:31 -- common/autotest_common.sh@941 -- # uname 00:04:45.413 14:45:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.413 14:45:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3645595 00:04:45.413 14:45:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:45.413 14:45:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:45.413 14:45:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3645595' 00:04:45.413 killing process with pid 3645595 00:04:45.413 14:45:31 -- common/autotest_common.sh@955 -- # kill 3645595 00:04:45.413 14:45:31 -- common/autotest_common.sh@960 -- # wait 3645595 00:04:47.309 14:45:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:47.309 14:45:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:47.309 14:45:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:47.309 14:45:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:47.309 14:45:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:47.309 14:45:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:47.309 14:45:32 -- common/autotest_common.sh@10 -- # set +x 00:04:47.309 14:45:32 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:47.309 14:45:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.309 14:45:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.309 14:45:32 -- common/autotest_common.sh@10 -- # set +x 00:04:47.309 ************************************ 00:04:47.309 START TEST env 00:04:47.309 ************************************ 00:04:47.309 
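The OPAL revert path above is tolerant by design: the drive rejects the admin SP session (error 18), the RPC surfaces that as JSON-RPC -32603, and the harness swallows the failure rather than aborting the run. A minimal sketch of the same sequence, with method names and paths as they appear in the trace:

    # attach the controller to a running spdk_tgt, then attempt the revert
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0
    # revert failure is acceptable: not every drive supports Opal TPer revert
    "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true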
14:45:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:47.309 * Looking for test storage... 00:04:47.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:47.309 14:45:32 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:47.309 14:45:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.309 14:45:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.309 14:45:32 -- common/autotest_common.sh@10 -- # set +x 00:04:47.567 ************************************ 00:04:47.567 START TEST env_memory 00:04:47.567 ************************************ 00:04:47.567 14:45:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:47.567 00:04:47.567 00:04:47.567 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.567 http://cunit.sourceforge.net/ 00:04:47.567 00:04:47.567 00:04:47.567 Suite: memory 00:04:47.567 Test: alloc and free memory map ...[2024-04-26 14:45:33.112295] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:47.567 passed 00:04:47.567 Test: mem map translation ...[2024-04-26 14:45:33.132313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:47.567 [2024-04-26 14:45:33.132351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:47.567 [2024-04-26 14:45:33.132393] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:47.567 [2024-04-26 14:45:33.132405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:47.567 passed 00:04:47.567 Test: mem map registration ...[2024-04-26 14:45:33.172716] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:47.567 [2024-04-26 14:45:33.172735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:47.567 passed 00:04:47.567 Test: mem map adjacent registrations ...passed 00:04:47.567 00:04:47.567 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.567 suites 1 1 n/a 0 0 00:04:47.567 tests 4 4 4 0 0 00:04:47.567 asserts 152 152 152 0 n/a 00:04:47.567 00:04:47.567 Elapsed time = 0.140 seconds 00:04:47.567 00:04:47.567 real 0m0.148s 00:04:47.567 user 0m0.140s 00:04:47.567 sys 0m0.008s 00:04:47.567 14:45:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.567 14:45:33 -- common/autotest_common.sh@10 -- # set +x 00:04:47.567 ************************************ 00:04:47.567 END TEST env_memory 00:04:47.567 ************************************ 00:04:47.567 14:45:33 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:47.567 14:45:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 
']' 00:04:47.567 14:45:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.567 14:45:33 -- common/autotest_common.sh@10 -- # set +x 00:04:47.827 ************************************ 00:04:47.827 START TEST env_vtophys 00:04:47.827 ************************************ 00:04:47.827 14:45:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:47.827 EAL: lib.eal log level changed from notice to debug 00:04:47.827 EAL: Detected lcore 0 as core 0 on socket 0 00:04:47.827 EAL: Detected lcore 1 as core 1 on socket 0 00:04:47.827 EAL: Detected lcore 2 as core 2 on socket 0 00:04:47.827 EAL: Detected lcore 3 as core 3 on socket 0 00:04:47.827 EAL: Detected lcore 4 as core 4 on socket 0 00:04:47.827 EAL: Detected lcore 5 as core 5 on socket 0 00:04:47.827 EAL: Detected lcore 6 as core 8 on socket 0 00:04:47.827 EAL: Detected lcore 7 as core 9 on socket 0 00:04:47.827 EAL: Detected lcore 8 as core 10 on socket 0 00:04:47.827 EAL: Detected lcore 9 as core 11 on socket 0 00:04:47.827 EAL: Detected lcore 10 as core 12 on socket 0 00:04:47.827 EAL: Detected lcore 11 as core 13 on socket 0 00:04:47.827 EAL: Detected lcore 12 as core 0 on socket 1 00:04:47.827 EAL: Detected lcore 13 as core 1 on socket 1 00:04:47.827 EAL: Detected lcore 14 as core 2 on socket 1 00:04:47.827 EAL: Detected lcore 15 as core 3 on socket 1 00:04:47.827 EAL: Detected lcore 16 as core 4 on socket 1 00:04:47.827 EAL: Detected lcore 17 as core 5 on socket 1 00:04:47.827 EAL: Detected lcore 18 as core 8 on socket 1 00:04:47.827 EAL: Detected lcore 19 as core 9 on socket 1 00:04:47.827 EAL: Detected lcore 20 as core 10 on socket 1 00:04:47.827 EAL: Detected lcore 21 as core 11 on socket 1 00:04:47.827 EAL: Detected lcore 22 as core 12 on socket 1 00:04:47.827 EAL: Detected lcore 23 as core 13 on socket 1 00:04:47.827 EAL: Detected lcore 24 as core 0 on socket 0 00:04:47.827 EAL: Detected lcore 25 as core 1 on socket 0 00:04:47.827 EAL: Detected lcore 26 as core 2 on socket 0 00:04:47.827 EAL: Detected lcore 27 as core 3 on socket 0 00:04:47.827 EAL: Detected lcore 28 as core 4 on socket 0 00:04:47.827 EAL: Detected lcore 29 as core 5 on socket 0 00:04:47.827 EAL: Detected lcore 30 as core 8 on socket 0 00:04:47.827 EAL: Detected lcore 31 as core 9 on socket 0 00:04:47.827 EAL: Detected lcore 32 as core 10 on socket 0 00:04:47.827 EAL: Detected lcore 33 as core 11 on socket 0 00:04:47.827 EAL: Detected lcore 34 as core 12 on socket 0 00:04:47.827 EAL: Detected lcore 35 as core 13 on socket 0 00:04:47.827 EAL: Detected lcore 36 as core 0 on socket 1 00:04:47.827 EAL: Detected lcore 37 as core 1 on socket 1 00:04:47.827 EAL: Detected lcore 38 as core 2 on socket 1 00:04:47.827 EAL: Detected lcore 39 as core 3 on socket 1 00:04:47.827 EAL: Detected lcore 40 as core 4 on socket 1 00:04:47.827 EAL: Detected lcore 41 as core 5 on socket 1 00:04:47.827 EAL: Detected lcore 42 as core 8 on socket 1 00:04:47.827 EAL: Detected lcore 43 as core 9 on socket 1 00:04:47.827 EAL: Detected lcore 44 as core 10 on socket 1 00:04:47.827 EAL: Detected lcore 45 as core 11 on socket 1 00:04:47.827 EAL: Detected lcore 46 as core 12 on socket 1 00:04:47.827 EAL: Detected lcore 47 as core 13 on socket 1 00:04:47.827 EAL: Maximum logical cores by configuration: 128 00:04:47.827 EAL: Detected CPU lcores: 48 00:04:47.827 EAL: Detected NUMA nodes: 2 00:04:47.827 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:47.827 EAL: Detected shared linkage of DPDK 00:04:47.827 EAL: open 
shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:47.827 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:47.827 EAL: Registered [vdev] bus. 00:04:47.827 EAL: bus.vdev log level changed from disabled to notice 00:04:47.827 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:47.827 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:47.827 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:47.827 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:47.827 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:47.827 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:47.827 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:47.827 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:47.827 EAL: No shared files mode enabled, IPC will be disabled 00:04:47.827 EAL: No shared files mode enabled, IPC is disabled 00:04:47.827 EAL: Bus pci wants IOVA as 'DC' 00:04:47.827 EAL: Bus vdev wants IOVA as 'DC' 00:04:47.827 EAL: Buses did not request a specific IOVA mode. 00:04:47.827 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:47.827 EAL: Selected IOVA mode 'VA' 00:04:47.827 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.827 EAL: Probing VFIO support... 00:04:47.827 EAL: IOMMU type 1 (Type 1) is supported 00:04:47.827 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:47.827 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:47.827 EAL: VFIO support initialized 00:04:47.827 EAL: Ask a virtual area of 0x2e000 bytes 00:04:47.827 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:47.827 EAL: Setting up physically contiguous memory... 
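Two details of the EAL bring-up above are worth spelling out. First, neither bus forced an IOVA mode ('DC'), so EAL selects IOVA-as-VA because an IOMMU is present and the VFIO type 1 backend initializes. Second, each memseg list reserved below covers n_segs 8192 x 2 MiB hugepages = 16 GiB, exactly the 0x400000000-byte virtual areas being requested; with 4 lists per socket across 2 sockets that is 128 GiB of reserved VA. A quick host-side check (assumed commands, not part of the trace) mirrors what EAL probes here:

    ls /sys/kernel/iommu_groups | wc -l    # non-zero: an IOMMU is active
    lsmod | grep vfio_iommu_type1          # VFIO type 1 backend loaded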
00:04:47.827 EAL: Setting maximum number of open files to 524288 00:04:47.827 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:47.827 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:47.827 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:47.827 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.827 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:47.827 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.827 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.827 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:47.827 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:47.827 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.827 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:47.827 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.827 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.827 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:47.827 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:47.827 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.827 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:47.827 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.827 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.827 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:47.827 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:47.827 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.827 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:47.827 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.827 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.827 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:47.827 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:47.827 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:47.827 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.827 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:47.827 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:47.827 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.827 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:47.827 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:47.827 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.827 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:47.827 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:47.827 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.827 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:47.827 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:47.827 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.827 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:47.827 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:47.827 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.827 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:47.827 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:47.827 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.827 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:47.827 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:47.827 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.827 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:47.827 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:47.827 EAL: Hugepages will be freed exactly as allocated. 00:04:47.827 EAL: No shared files mode enabled, IPC is disabled 00:04:47.827 EAL: No shared files mode enabled, IPC is disabled 00:04:47.827 EAL: TSC frequency is ~2700000 KHz 00:04:47.827 EAL: Main lcore 0 is ready (tid=7f28ab1dda00;cpuset=[0]) 00:04:47.827 EAL: Trying to obtain current memory policy. 00:04:47.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.827 EAL: Restoring previous memory policy: 0 00:04:47.827 EAL: request: mp_malloc_sync 00:04:47.827 EAL: No shared files mode enabled, IPC is disabled 00:04:47.827 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.827 EAL: PCI device 0000:0e:00.0 on NUMA socket 0 00:04:47.827 EAL: probe driver: 8086:1583 net_i40e 00:04:47.827 EAL: Not managed by a supported kernel driver, skipped 00:04:47.827 EAL: PCI device 0000:0e:00.1 on NUMA socket 0 00:04:47.827 EAL: probe driver: 8086:1583 net_i40e 00:04:47.827 EAL: Not managed by a supported kernel driver, skipped 00:04:47.827 EAL: No shared files mode enabled, IPC is disabled 00:04:47.827 EAL: No shared files mode enabled, IPC is disabled 00:04:47.827 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:47.827 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.827 00:04:47.827 00:04:47.827 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.828 http://cunit.sourceforge.net/ 00:04:47.828 00:04:47.828 00:04:47.828 Suite: components_suite 00:04:47.828 Test: vtophys_malloc_test ...passed 00:04:47.828 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:47.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.828 EAL: Restoring previous memory policy: 4 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was expanded by 4MB 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was shrunk by 4MB 00:04:47.828 EAL: Trying to obtain current memory policy. 00:04:47.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.828 EAL: Restoring previous memory policy: 4 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was expanded by 6MB 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was shrunk by 6MB 00:04:47.828 EAL: Trying to obtain current memory policy. 00:04:47.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.828 EAL: Restoring previous memory policy: 4 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.828 EAL: Trying to obtain current memory policy. 
00:04:47.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.828 EAL: Restoring previous memory policy: 4 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.828 EAL: Trying to obtain current memory policy. 00:04:47.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.828 EAL: Restoring previous memory policy: 4 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.828 EAL: Trying to obtain current memory policy. 00:04:47.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.828 EAL: Restoring previous memory policy: 4 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.828 EAL: Trying to obtain current memory policy. 00:04:47.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.828 EAL: Restoring previous memory policy: 4 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.828 EAL: request: mp_malloc_sync 00:04:47.828 EAL: No shared files mode enabled, IPC is disabled 00:04:47.828 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.828 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.087 EAL: request: mp_malloc_sync 00:04:48.087 EAL: No shared files mode enabled, IPC is disabled 00:04:48.087 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.087 EAL: Trying to obtain current memory policy. 00:04:48.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.087 EAL: Restoring previous memory policy: 4 00:04:48.087 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.087 EAL: request: mp_malloc_sync 00:04:48.087 EAL: No shared files mode enabled, IPC is disabled 00:04:48.087 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.087 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.087 EAL: request: mp_malloc_sync 00:04:48.087 EAL: No shared files mode enabled, IPC is disabled 00:04:48.087 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.087 EAL: Trying to obtain current memory policy. 
00:04:48.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.345 EAL: Restoring previous memory policy: 4 00:04:48.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.345 EAL: request: mp_malloc_sync 00:04:48.345 EAL: No shared files mode enabled, IPC is disabled 00:04:48.345 EAL: Heap on socket 0 was expanded by 514MB 00:04:48.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.603 EAL: request: mp_malloc_sync 00:04:48.603 EAL: No shared files mode enabled, IPC is disabled 00:04:48.603 EAL: Heap on socket 0 was shrunk by 514MB 00:04:48.603 EAL: Trying to obtain current memory policy. 00:04:48.603 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.861 EAL: Restoring previous memory policy: 4 00:04:48.861 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.861 EAL: request: mp_malloc_sync 00:04:48.861 EAL: No shared files mode enabled, IPC is disabled 00:04:48.861 EAL: Heap on socket 0 was expanded by 1026MB 00:04:49.118 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.376 EAL: request: mp_malloc_sync 00:04:49.376 EAL: No shared files mode enabled, IPC is disabled 00:04:49.376 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:49.376 passed 00:04:49.376 00:04:49.376 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.376 suites 1 1 n/a 0 0 00:04:49.376 tests 2 2 2 0 0 00:04:49.376 asserts 497 497 497 0 n/a 00:04:49.376 00:04:49.376 Elapsed time = 1.389 seconds 00:04:49.376 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.376 EAL: request: mp_malloc_sync 00:04:49.376 EAL: No shared files mode enabled, IPC is disabled 00:04:49.376 EAL: Heap on socket 0 was shrunk by 2MB 00:04:49.376 EAL: No shared files mode enabled, IPC is disabled 00:04:49.376 EAL: No shared files mode enabled, IPC is disabled 00:04:49.376 EAL: No shared files mode enabled, IPC is disabled 00:04:49.376 00:04:49.376 real 0m1.519s 00:04:49.376 user 0m0.873s 00:04:49.376 sys 0m0.601s 00:04:49.376 14:45:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.376 14:45:34 -- common/autotest_common.sh@10 -- # set +x 00:04:49.376 ************************************ 00:04:49.376 END TEST env_vtophys 00:04:49.376 ************************************ 00:04:49.376 14:45:34 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.376 14:45:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.376 14:45:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.376 14:45:34 -- common/autotest_common.sh@10 -- # set +x 00:04:49.376 ************************************ 00:04:49.376 START TEST env_pci 00:04:49.376 ************************************ 00:04:49.376 14:45:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.376 00:04:49.376 00:04:49.376 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.376 http://cunit.sourceforge.net/ 00:04:49.376 00:04:49.376 00:04:49.376 Suite: pci 00:04:49.376 Test: pci_hook ...[2024-04-26 14:45:35.011136] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3646541 has claimed it 00:04:49.376 EAL: Cannot find device (10000:00:01.0) 00:04:49.376 EAL: Failed to attach device on primary process 00:04:49.376 passed 00:04:49.376 00:04:49.376 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.376 suites 1 1 n/a 0 0 00:04:49.376 tests 1 1 1 0 0 
00:04:49.376 asserts 25 25 25 0 n/a 00:04:49.376 00:04:49.376 Elapsed time = 0.020 seconds 00:04:49.377 00:04:49.377 real 0m0.033s 00:04:49.377 user 0m0.009s 00:04:49.377 sys 0m0.024s 00:04:49.377 14:45:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.377 14:45:35 -- common/autotest_common.sh@10 -- # set +x 00:04:49.377 ************************************ 00:04:49.377 END TEST env_pci 00:04:49.377 ************************************ 00:04:49.377 14:45:35 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:49.377 14:45:35 -- env/env.sh@15 -- # uname 00:04:49.377 14:45:35 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:49.377 14:45:35 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:49.377 14:45:35 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.377 14:45:35 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:49.377 14:45:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.377 14:45:35 -- common/autotest_common.sh@10 -- # set +x 00:04:49.634 ************************************ 00:04:49.634 START TEST env_dpdk_post_init 00:04:49.634 ************************************ 00:04:49.634 14:45:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.634 EAL: Detected CPU lcores: 48 00:04:49.634 EAL: Detected NUMA nodes: 2 00:04:49.634 EAL: Detected shared linkage of DPDK 00:04:49.634 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.634 EAL: Selected IOVA mode 'VA' 00:04:49.634 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.634 EAL: VFIO support initialized 00:04:49.634 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.634 EAL: Using IOMMU type 1 (Type 1) 00:04:49.634 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:49.634 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:49.634 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:49.634 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:49.634 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:49.634 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:49.634 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:49.634 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:49.893 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:49.893 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:49.893 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:49.893 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:49.893 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:49.893 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:49.893 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:49.893 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:50.826 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:04:54.115 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:04:54.115 EAL: 
Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:04:54.115 Starting DPDK initialization... 00:04:54.115 Starting SPDK post initialization... 00:04:54.115 SPDK NVMe probe 00:04:54.115 Attaching to 0000:82:00.0 00:04:54.115 Attached to 0000:82:00.0 00:04:54.115 Cleaning up... 00:04:54.115 00:04:54.115 real 0m4.389s 00:04:54.115 user 0m3.244s 00:04:54.115 sys 0m0.205s 00:04:54.115 14:45:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.115 14:45:39 -- common/autotest_common.sh@10 -- # set +x 00:04:54.115 ************************************ 00:04:54.115 END TEST env_dpdk_post_init 00:04:54.115 ************************************ 00:04:54.115 14:45:39 -- env/env.sh@26 -- # uname 00:04:54.115 14:45:39 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:54.115 14:45:39 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:54.115 14:45:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.115 14:45:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.115 14:45:39 -- common/autotest_common.sh@10 -- # set +x 00:04:54.115 ************************************ 00:04:54.115 START TEST env_mem_callbacks 00:04:54.115 ************************************ 00:04:54.115 14:45:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:54.115 EAL: Detected CPU lcores: 48 00:04:54.115 EAL: Detected NUMA nodes: 2 00:04:54.115 EAL: Detected shared linkage of DPDK 00:04:54.115 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.115 EAL: Selected IOVA mode 'VA' 00:04:54.115 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.115 EAL: VFIO support initialized 00:04:54.115 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.115 00:04:54.115 00:04:54.115 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.115 http://cunit.sourceforge.net/ 00:04:54.115 00:04:54.115 00:04:54.115 Suite: memory 00:04:54.115 Test: test ... 
00:04:54.115 register 0x200000200000 2097152 00:04:54.115 malloc 3145728 00:04:54.115 register 0x200000400000 4194304 00:04:54.115 buf 0x200000500000 len 3145728 PASSED 00:04:54.115 malloc 64 00:04:54.115 buf 0x2000004fff40 len 64 PASSED 00:04:54.115 malloc 4194304 00:04:54.115 register 0x200000800000 6291456 00:04:54.115 buf 0x200000a00000 len 4194304 PASSED 00:04:54.115 free 0x200000500000 3145728 00:04:54.115 free 0x2000004fff40 64 00:04:54.115 unregister 0x200000400000 4194304 PASSED 00:04:54.115 free 0x200000a00000 4194304 00:04:54.115 unregister 0x200000800000 6291456 PASSED 00:04:54.115 malloc 8388608 00:04:54.115 register 0x200000400000 10485760 00:04:54.115 buf 0x200000600000 len 8388608 PASSED 00:04:54.115 free 0x200000600000 8388608 00:04:54.115 unregister 0x200000400000 10485760 PASSED 00:04:54.115 passed 00:04:54.115 00:04:54.115 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.115 suites 1 1 n/a 0 0 00:04:54.116 tests 1 1 1 0 0 00:04:54.116 asserts 15 15 15 0 n/a 00:04:54.116 00:04:54.116 Elapsed time = 0.005 seconds 00:04:54.116 00:04:54.116 real 0m0.049s 00:04:54.116 user 0m0.014s 00:04:54.116 sys 0m0.035s 00:04:54.116 14:45:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.116 14:45:39 -- common/autotest_common.sh@10 -- # set +x 00:04:54.116 ************************************ 00:04:54.116 END TEST env_mem_callbacks 00:04:54.116 ************************************ 00:04:54.116 00:04:54.116 real 0m6.820s 00:04:54.116 user 0m4.520s 00:04:54.116 sys 0m1.270s 00:04:54.116 14:45:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.116 14:45:39 -- common/autotest_common.sh@10 -- # set +x 00:04:54.116 ************************************ 00:04:54.116 END TEST env 00:04:54.116 ************************************ 00:04:54.116 14:45:39 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:54.116 14:45:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.116 14:45:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.116 14:45:39 -- common/autotest_common.sh@10 -- # set +x 00:04:54.374 ************************************ 00:04:54.374 START TEST rpc 00:04:54.374 ************************************ 00:04:54.374 14:45:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:54.374 * Looking for test storage... 00:04:54.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.374 14:45:39 -- rpc/rpc.sh@65 -- # spdk_pid=3647312 00:04:54.374 14:45:39 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:54.374 14:45:39 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.374 14:45:39 -- rpc/rpc.sh@67 -- # waitforlisten 3647312 00:04:54.374 14:45:39 -- common/autotest_common.sh@817 -- # '[' -z 3647312 ']' 00:04:54.374 14:45:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.374 14:45:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.374 14:45:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
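The rpc tests starting above launch spdk_tgt in the background and block on waitforlisten before issuing any RPC. A hedged sketch of that pattern (the variable names rpc_addr and max_retries come from the trace; the loop body is an assumption, not the exact autotest_common.sh implementation):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target exited early
            # the socket is usable once any RPC answers
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }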
00:04:54.374 14:45:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.374 14:45:39 -- common/autotest_common.sh@10 -- # set +x 00:04:54.374 [2024-04-26 14:45:39.974239] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:04:54.374 [2024-04-26 14:45:39.974355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647312 ] 00:04:54.374 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.374 [2024-04-26 14:45:40.006622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:54.374 [2024-04-26 14:45:40.036885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.632 [2024-04-26 14:45:40.124270] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:54.632 [2024-04-26 14:45:40.124327] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3647312' to capture a snapshot of events at runtime. 00:04:54.632 [2024-04-26 14:45:40.124350] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:54.632 [2024-04-26 14:45:40.124376] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:54.632 [2024-04-26 14:45:40.124387] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3647312 for offline analysis/debug. 00:04:54.632 [2024-04-26 14:45:40.124415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.632 14:45:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:54.632 14:45:40 -- common/autotest_common.sh@850 -- # return 0 00:04:54.632 14:45:40 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.632 14:45:40 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.632 14:45:40 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:54.632 14:45:40 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:54.632 14:45:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.632 14:45:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.633 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:54.891 ************************************ 00:04:54.891 START TEST rpc_integrity 00:04:54.891 ************************************ 00:04:54.891 14:45:40 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:54.891 14:45:40 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:54.891 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.891 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:54.891 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.891 14:45:40 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:54.891 14:45:40 -- rpc/rpc.sh@13 -- # jq length 00:04:54.891 14:45:40 -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.891 14:45:40 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.891 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.891 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:54.891 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.891 14:45:40 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:54.891 14:45:40 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.891 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.891 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:54.891 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.891 14:45:40 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.891 { 00:04:54.891 "name": "Malloc0", 00:04:54.891 "aliases": [ 00:04:54.891 "2b804843-b3c5-4880-9542-ae65a141a214" 00:04:54.891 ], 00:04:54.891 "product_name": "Malloc disk", 00:04:54.891 "block_size": 512, 00:04:54.891 "num_blocks": 16384, 00:04:54.891 "uuid": "2b804843-b3c5-4880-9542-ae65a141a214", 00:04:54.891 "assigned_rate_limits": { 00:04:54.891 "rw_ios_per_sec": 0, 00:04:54.891 "rw_mbytes_per_sec": 0, 00:04:54.891 "r_mbytes_per_sec": 0, 00:04:54.891 "w_mbytes_per_sec": 0 00:04:54.891 }, 00:04:54.891 "claimed": false, 00:04:54.891 "zoned": false, 00:04:54.891 "supported_io_types": { 00:04:54.891 "read": true, 00:04:54.891 "write": true, 00:04:54.891 "unmap": true, 00:04:54.891 "write_zeroes": true, 00:04:54.891 "flush": true, 00:04:54.891 "reset": true, 00:04:54.891 "compare": false, 00:04:54.891 "compare_and_write": false, 00:04:54.891 "abort": true, 00:04:54.891 "nvme_admin": false, 00:04:54.891 "nvme_io": false 00:04:54.891 }, 00:04:54.891 "memory_domains": [ 00:04:54.891 { 00:04:54.891 "dma_device_id": "system", 00:04:54.891 "dma_device_type": 1 00:04:54.891 }, 00:04:54.891 { 00:04:54.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.891 "dma_device_type": 2 00:04:54.891 } 00:04:54.891 ], 00:04:54.891 "driver_specific": {} 00:04:54.891 } 00:04:54.891 ]' 00:04:54.891 14:45:40 -- rpc/rpc.sh@17 -- # jq length 00:04:54.891 14:45:40 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.891 14:45:40 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:54.891 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.891 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:54.891 [2024-04-26 14:45:40.579625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:54.891 [2024-04-26 14:45:40.579669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.891 [2024-04-26 14:45:40.579694] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f35b40 00:04:54.891 [2024-04-26 14:45:40.579712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.891 [2024-04-26 14:45:40.581218] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.891 [2024-04-26 14:45:40.581244] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.891 Passthru0 00:04:54.891 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.891 14:45:40 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.891 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.891 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:54.891 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.891 14:45:40 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.891 { 00:04:54.891 
"name": "Malloc0", 00:04:54.891 "aliases": [ 00:04:54.891 "2b804843-b3c5-4880-9542-ae65a141a214" 00:04:54.891 ], 00:04:54.891 "product_name": "Malloc disk", 00:04:54.891 "block_size": 512, 00:04:54.891 "num_blocks": 16384, 00:04:54.891 "uuid": "2b804843-b3c5-4880-9542-ae65a141a214", 00:04:54.891 "assigned_rate_limits": { 00:04:54.891 "rw_ios_per_sec": 0, 00:04:54.891 "rw_mbytes_per_sec": 0, 00:04:54.891 "r_mbytes_per_sec": 0, 00:04:54.891 "w_mbytes_per_sec": 0 00:04:54.891 }, 00:04:54.891 "claimed": true, 00:04:54.891 "claim_type": "exclusive_write", 00:04:54.891 "zoned": false, 00:04:54.891 "supported_io_types": { 00:04:54.891 "read": true, 00:04:54.891 "write": true, 00:04:54.891 "unmap": true, 00:04:54.891 "write_zeroes": true, 00:04:54.891 "flush": true, 00:04:54.891 "reset": true, 00:04:54.891 "compare": false, 00:04:54.891 "compare_and_write": false, 00:04:54.891 "abort": true, 00:04:54.891 "nvme_admin": false, 00:04:54.891 "nvme_io": false 00:04:54.891 }, 00:04:54.891 "memory_domains": [ 00:04:54.891 { 00:04:54.891 "dma_device_id": "system", 00:04:54.891 "dma_device_type": 1 00:04:54.891 }, 00:04:54.891 { 00:04:54.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.891 "dma_device_type": 2 00:04:54.891 } 00:04:54.891 ], 00:04:54.891 "driver_specific": {} 00:04:54.891 }, 00:04:54.891 { 00:04:54.891 "name": "Passthru0", 00:04:54.891 "aliases": [ 00:04:54.891 "d26118c7-85ed-5ba1-bc28-7b32dcc5f648" 00:04:54.891 ], 00:04:54.891 "product_name": "passthru", 00:04:54.891 "block_size": 512, 00:04:54.891 "num_blocks": 16384, 00:04:54.891 "uuid": "d26118c7-85ed-5ba1-bc28-7b32dcc5f648", 00:04:54.891 "assigned_rate_limits": { 00:04:54.891 "rw_ios_per_sec": 0, 00:04:54.891 "rw_mbytes_per_sec": 0, 00:04:54.891 "r_mbytes_per_sec": 0, 00:04:54.891 "w_mbytes_per_sec": 0 00:04:54.891 }, 00:04:54.891 "claimed": false, 00:04:54.891 "zoned": false, 00:04:54.891 "supported_io_types": { 00:04:54.891 "read": true, 00:04:54.891 "write": true, 00:04:54.891 "unmap": true, 00:04:54.891 "write_zeroes": true, 00:04:54.891 "flush": true, 00:04:54.891 "reset": true, 00:04:54.891 "compare": false, 00:04:54.891 "compare_and_write": false, 00:04:54.892 "abort": true, 00:04:54.892 "nvme_admin": false, 00:04:54.892 "nvme_io": false 00:04:54.892 }, 00:04:54.892 "memory_domains": [ 00:04:54.892 { 00:04:54.892 "dma_device_id": "system", 00:04:54.892 "dma_device_type": 1 00:04:54.892 }, 00:04:54.892 { 00:04:54.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.892 "dma_device_type": 2 00:04:54.892 } 00:04:54.892 ], 00:04:54.892 "driver_specific": { 00:04:54.892 "passthru": { 00:04:54.892 "name": "Passthru0", 00:04:54.892 "base_bdev_name": "Malloc0" 00:04:54.892 } 00:04:54.892 } 00:04:54.892 } 00:04:54.892 ]' 00:04:54.892 14:45:40 -- rpc/rpc.sh@21 -- # jq length 00:04:55.150 14:45:40 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:55.150 14:45:40 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:55.150 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.150 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.150 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.150 14:45:40 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:55.150 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.150 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.150 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.150 14:45:40 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:55.150 14:45:40 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:55.150 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.150 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.150 14:45:40 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:55.150 14:45:40 -- rpc/rpc.sh@26 -- # jq length 00:04:55.150 14:45:40 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:55.150 00:04:55.150 real 0m0.230s 00:04:55.150 user 0m0.154s 00:04:55.150 sys 0m0.017s 00:04:55.150 14:45:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.150 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.150 ************************************ 00:04:55.150 END TEST rpc_integrity 00:04:55.150 ************************************ 00:04:55.150 14:45:40 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:55.150 14:45:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.150 14:45:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.150 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.150 ************************************ 00:04:55.150 START TEST rpc_plugins 00:04:55.150 ************************************ 00:04:55.150 14:45:40 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:55.150 14:45:40 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:55.150 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.150 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.150 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.150 14:45:40 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:55.150 14:45:40 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:55.150 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.150 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.150 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.150 14:45:40 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:55.150 { 00:04:55.150 "name": "Malloc1", 00:04:55.150 "aliases": [ 00:04:55.150 "1db59791-6318-46a1-975c-a71b1b9893f4" 00:04:55.150 ], 00:04:55.150 "product_name": "Malloc disk", 00:04:55.150 "block_size": 4096, 00:04:55.150 "num_blocks": 256, 00:04:55.150 "uuid": "1db59791-6318-46a1-975c-a71b1b9893f4", 00:04:55.150 "assigned_rate_limits": { 00:04:55.150 "rw_ios_per_sec": 0, 00:04:55.150 "rw_mbytes_per_sec": 0, 00:04:55.150 "r_mbytes_per_sec": 0, 00:04:55.150 "w_mbytes_per_sec": 0 00:04:55.150 }, 00:04:55.150 "claimed": false, 00:04:55.150 "zoned": false, 00:04:55.150 "supported_io_types": { 00:04:55.150 "read": true, 00:04:55.150 "write": true, 00:04:55.150 "unmap": true, 00:04:55.150 "write_zeroes": true, 00:04:55.150 "flush": true, 00:04:55.150 "reset": true, 00:04:55.150 "compare": false, 00:04:55.150 "compare_and_write": false, 00:04:55.150 "abort": true, 00:04:55.150 "nvme_admin": false, 00:04:55.150 "nvme_io": false 00:04:55.150 }, 00:04:55.150 "memory_domains": [ 00:04:55.150 { 00:04:55.150 "dma_device_id": "system", 00:04:55.150 "dma_device_type": 1 00:04:55.150 }, 00:04:55.150 { 00:04:55.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.150 "dma_device_type": 2 00:04:55.150 } 00:04:55.150 ], 00:04:55.150 "driver_specific": {} 00:04:55.150 } 00:04:55.150 ]' 00:04:55.150 14:45:40 -- rpc/rpc.sh@32 -- # jq length 00:04:55.150 14:45:40 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:55.150 14:45:40 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:55.150 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.150 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.150 
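The rpc_plugins test above exercises RPC methods supplied by a plugin module rather than the built-in command table. A minimal sketch of the same flow outside the harness, assuming the test's rpc_plugin module is importable (the PYTHONPATH entry below is illustrative, not taken from this log):

  export PYTHONPATH=$PYTHONPATH:test/rpc               # assumed location of rpc_plugin.py
  scripts/rpc.py --plugin rpc_plugin create_malloc     # plugin-defined method; prints the new bdev name, e.g. Malloc1
  scripts/rpc.py bdev_get_bdevs | jq length            # 1 while the plugin-created bdev exists
  scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1
  scripts/rpc.py bdev_get_bdevs | jq length            # back to 0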
14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.150 14:45:40 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:55.150 14:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.150 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.409 14:45:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.409 14:45:40 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:55.409 14:45:40 -- rpc/rpc.sh@36 -- # jq length 00:04:55.409 14:45:40 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:55.409 00:04:55.409 real 0m0.114s 00:04:55.409 user 0m0.073s 00:04:55.409 sys 0m0.011s 00:04:55.409 14:45:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.409 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.409 ************************************ 00:04:55.409 END TEST rpc_plugins 00:04:55.409 ************************************ 00:04:55.409 14:45:40 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:55.409 14:45:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.409 14:45:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.409 14:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.409 ************************************ 00:04:55.409 START TEST rpc_trace_cmd_test 00:04:55.409 ************************************ 00:04:55.409 14:45:41 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:55.409 14:45:41 -- rpc/rpc.sh@40 -- # local info 00:04:55.409 14:45:41 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:55.409 14:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.409 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.409 14:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.409 14:45:41 -- rpc/rpc.sh@42 -- # info='{ 00:04:55.409 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3647312", 00:04:55.409 "tpoint_group_mask": "0x8", 00:04:55.409 "iscsi_conn": { 00:04:55.409 "mask": "0x2", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "scsi": { 00:04:55.409 "mask": "0x4", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "bdev": { 00:04:55.409 "mask": "0x8", 00:04:55.409 "tpoint_mask": "0xffffffffffffffff" 00:04:55.409 }, 00:04:55.409 "nvmf_rdma": { 00:04:55.409 "mask": "0x10", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "nvmf_tcp": { 00:04:55.409 "mask": "0x20", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "ftl": { 00:04:55.409 "mask": "0x40", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "blobfs": { 00:04:55.409 "mask": "0x80", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "dsa": { 00:04:55.409 "mask": "0x200", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "thread": { 00:04:55.409 "mask": "0x400", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "nvme_pcie": { 00:04:55.409 "mask": "0x800", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "iaa": { 00:04:55.409 "mask": "0x1000", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "nvme_tcp": { 00:04:55.409 "mask": "0x2000", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "bdev_nvme": { 00:04:55.409 "mask": "0x4000", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 }, 00:04:55.409 "sock": { 00:04:55.409 "mask": "0x8000", 00:04:55.409 "tpoint_mask": "0x0" 00:04:55.409 } 00:04:55.409 }' 00:04:55.409 14:45:41 -- rpc/rpc.sh@43 -- # jq length 00:04:55.409 14:45:41 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:55.409 14:45:41 -- 
rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:55.409 14:45:41 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:55.409 14:45:41 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:55.668 14:45:41 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:55.668 14:45:41 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:55.668 14:45:41 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:55.668 14:45:41 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:55.668 14:45:41 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:55.668 00:04:55.668 real 0m0.193s 00:04:55.668 user 0m0.168s 00:04:55.668 sys 0m0.017s 00:04:55.668 14:45:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.668 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.668 ************************************ 00:04:55.668 END TEST rpc_trace_cmd_test 00:04:55.668 ************************************ 00:04:55.668 14:45:41 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:55.668 14:45:41 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:55.668 14:45:41 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:55.668 14:45:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.668 14:45:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.668 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.668 ************************************ 00:04:55.668 START TEST rpc_daemon_integrity 00:04:55.668 ************************************ 00:04:55.668 14:45:41 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:55.668 14:45:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:55.668 14:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.668 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.668 14:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.668 14:45:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:55.668 14:45:41 -- rpc/rpc.sh@13 -- # jq length 00:04:55.668 14:45:41 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:55.927 14:45:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:55.927 14:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.927 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.927 14:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.927 14:45:41 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:55.927 14:45:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:55.927 14:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.927 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.927 14:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.927 14:45:41 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:55.927 { 00:04:55.927 "name": "Malloc2", 00:04:55.927 "aliases": [ 00:04:55.927 "39ab4a81-58cf-4d01-aafd-d920ac48573a" 00:04:55.927 ], 00:04:55.927 "product_name": "Malloc disk", 00:04:55.927 "block_size": 512, 00:04:55.927 "num_blocks": 16384, 00:04:55.927 "uuid": "39ab4a81-58cf-4d01-aafd-d920ac48573a", 00:04:55.927 "assigned_rate_limits": { 00:04:55.927 "rw_ios_per_sec": 0, 00:04:55.927 "rw_mbytes_per_sec": 0, 00:04:55.927 "r_mbytes_per_sec": 0, 00:04:55.927 "w_mbytes_per_sec": 0 00:04:55.927 }, 00:04:55.927 "claimed": false, 00:04:55.927 "zoned": false, 00:04:55.927 "supported_io_types": { 00:04:55.927 "read": true, 00:04:55.927 "write": true, 00:04:55.927 "unmap": true, 00:04:55.927 "write_zeroes": true, 00:04:55.927 "flush": true, 00:04:55.927 "reset": true, 00:04:55.927 "compare": false, 00:04:55.927 "compare_and_write": false, 00:04:55.927 "abort": true, 
00:04:55.927 "nvme_admin": false, 00:04:55.927 "nvme_io": false 00:04:55.927 }, 00:04:55.927 "memory_domains": [ 00:04:55.927 { 00:04:55.927 "dma_device_id": "system", 00:04:55.927 "dma_device_type": 1 00:04:55.927 }, 00:04:55.927 { 00:04:55.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.927 "dma_device_type": 2 00:04:55.927 } 00:04:55.927 ], 00:04:55.927 "driver_specific": {} 00:04:55.927 } 00:04:55.927 ]' 00:04:55.927 14:45:41 -- rpc/rpc.sh@17 -- # jq length 00:04:55.927 14:45:41 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:55.927 14:45:41 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:55.927 14:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.927 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.927 [2024-04-26 14:45:41.470987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:55.927 [2024-04-26 14:45:41.471038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:55.927 [2024-04-26 14:45:41.471090] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f36e20 00:04:55.927 [2024-04-26 14:45:41.471108] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:55.927 [2024-04-26 14:45:41.472435] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:55.927 [2024-04-26 14:45:41.472467] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:55.927 Passthru0 00:04:55.927 14:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.927 14:45:41 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:55.927 14:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.927 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.927 14:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.927 14:45:41 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:55.927 { 00:04:55.927 "name": "Malloc2", 00:04:55.927 "aliases": [ 00:04:55.927 "39ab4a81-58cf-4d01-aafd-d920ac48573a" 00:04:55.927 ], 00:04:55.927 "product_name": "Malloc disk", 00:04:55.927 "block_size": 512, 00:04:55.927 "num_blocks": 16384, 00:04:55.927 "uuid": "39ab4a81-58cf-4d01-aafd-d920ac48573a", 00:04:55.927 "assigned_rate_limits": { 00:04:55.927 "rw_ios_per_sec": 0, 00:04:55.927 "rw_mbytes_per_sec": 0, 00:04:55.927 "r_mbytes_per_sec": 0, 00:04:55.927 "w_mbytes_per_sec": 0 00:04:55.927 }, 00:04:55.927 "claimed": true, 00:04:55.927 "claim_type": "exclusive_write", 00:04:55.927 "zoned": false, 00:04:55.927 "supported_io_types": { 00:04:55.927 "read": true, 00:04:55.927 "write": true, 00:04:55.927 "unmap": true, 00:04:55.927 "write_zeroes": true, 00:04:55.927 "flush": true, 00:04:55.927 "reset": true, 00:04:55.927 "compare": false, 00:04:55.927 "compare_and_write": false, 00:04:55.927 "abort": true, 00:04:55.927 "nvme_admin": false, 00:04:55.927 "nvme_io": false 00:04:55.927 }, 00:04:55.927 "memory_domains": [ 00:04:55.927 { 00:04:55.927 "dma_device_id": "system", 00:04:55.927 "dma_device_type": 1 00:04:55.927 }, 00:04:55.927 { 00:04:55.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.927 "dma_device_type": 2 00:04:55.927 } 00:04:55.927 ], 00:04:55.927 "driver_specific": {} 00:04:55.927 }, 00:04:55.927 { 00:04:55.927 "name": "Passthru0", 00:04:55.927 "aliases": [ 00:04:55.927 "a18a0dba-ad4b-5aa6-a6d1-6baf438268cd" 00:04:55.927 ], 00:04:55.927 "product_name": "passthru", 00:04:55.927 "block_size": 512, 00:04:55.927 "num_blocks": 16384, 00:04:55.927 "uuid": 
"a18a0dba-ad4b-5aa6-a6d1-6baf438268cd", 00:04:55.927 "assigned_rate_limits": { 00:04:55.927 "rw_ios_per_sec": 0, 00:04:55.927 "rw_mbytes_per_sec": 0, 00:04:55.927 "r_mbytes_per_sec": 0, 00:04:55.927 "w_mbytes_per_sec": 0 00:04:55.927 }, 00:04:55.927 "claimed": false, 00:04:55.927 "zoned": false, 00:04:55.927 "supported_io_types": { 00:04:55.927 "read": true, 00:04:55.927 "write": true, 00:04:55.927 "unmap": true, 00:04:55.927 "write_zeroes": true, 00:04:55.927 "flush": true, 00:04:55.927 "reset": true, 00:04:55.927 "compare": false, 00:04:55.927 "compare_and_write": false, 00:04:55.927 "abort": true, 00:04:55.927 "nvme_admin": false, 00:04:55.927 "nvme_io": false 00:04:55.927 }, 00:04:55.927 "memory_domains": [ 00:04:55.927 { 00:04:55.927 "dma_device_id": "system", 00:04:55.927 "dma_device_type": 1 00:04:55.927 }, 00:04:55.927 { 00:04:55.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.927 "dma_device_type": 2 00:04:55.927 } 00:04:55.927 ], 00:04:55.927 "driver_specific": { 00:04:55.927 "passthru": { 00:04:55.927 "name": "Passthru0", 00:04:55.928 "base_bdev_name": "Malloc2" 00:04:55.928 } 00:04:55.928 } 00:04:55.928 } 00:04:55.928 ]' 00:04:55.928 14:45:41 -- rpc/rpc.sh@21 -- # jq length 00:04:55.928 14:45:41 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:55.928 14:45:41 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:55.928 14:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.928 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.928 14:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.928 14:45:41 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:55.928 14:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.928 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.928 14:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.928 14:45:41 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:55.928 14:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.928 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.928 14:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.928 14:45:41 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:55.928 14:45:41 -- rpc/rpc.sh@26 -- # jq length 00:04:55.928 14:45:41 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:55.928 00:04:55.928 real 0m0.232s 00:04:55.928 user 0m0.147s 00:04:55.928 sys 0m0.023s 00:04:55.928 14:45:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.928 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.928 ************************************ 00:04:55.928 END TEST rpc_daemon_integrity 00:04:55.928 ************************************ 00:04:55.928 14:45:41 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:55.928 14:45:41 -- rpc/rpc.sh@84 -- # killprocess 3647312 00:04:55.928 14:45:41 -- common/autotest_common.sh@936 -- # '[' -z 3647312 ']' 00:04:55.928 14:45:41 -- common/autotest_common.sh@940 -- # kill -0 3647312 00:04:55.928 14:45:41 -- common/autotest_common.sh@941 -- # uname 00:04:55.928 14:45:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:55.928 14:45:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3647312 00:04:55.928 14:45:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:55.928 14:45:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:55.928 14:45:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3647312' 00:04:55.928 killing process with pid 3647312 00:04:55.928 14:45:41 
-- common/autotest_common.sh@955 -- # kill 3647312 00:04:55.928 14:45:41 -- common/autotest_common.sh@960 -- # wait 3647312 00:04:56.494 00:04:56.494 real 0m2.187s 00:04:56.494 user 0m2.770s 00:04:56.494 sys 0m0.728s 00:04:56.494 14:45:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.494 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:04:56.494 ************************************ 00:04:56.494 END TEST rpc 00:04:56.494 ************************************ 00:04:56.494 14:45:42 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:56.494 14:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.494 14:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.494 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:04:56.494 ************************************ 00:04:56.494 START TEST skip_rpc 00:04:56.494 ************************************ 00:04:56.494 14:45:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:56.494 * Looking for test storage... 00:04:56.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.494 14:45:42 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.494 14:45:42 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:56.494 14:45:42 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:56.494 14:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.494 14:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.753 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:04:56.753 ************************************ 00:04:56.753 START TEST skip_rpc 00:04:56.753 ************************************ 00:04:56.753 14:45:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:56.753 14:45:42 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3647792 00:04:56.753 14:45:42 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.753 14:45:42 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:56.753 14:45:42 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:56.753 [2024-04-26 14:45:42.374726] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:04:56.753 [2024-04-26 14:45:42.374793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647792 ] 00:04:56.753 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.753 [2024-04-26 14:45:42.417513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
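The skip_rpc case starting here launches spdk_tgt with --no-rpc-server, so no Unix-domain RPC socket is ever created and every rpc_cmd call has to fail; the test then asserts exactly that. Sketched as plain commands under the same assumptions (SPDK build tree, default socket path):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target runs, but nothing listens on /var/tmp/spdk.sock
  scripts/rpc.py spdk_get_version               # expected to fail: no RPC server to connect to
  echo $?                                       # non-zero exit confirms the RPC server was skipped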
00:04:56.753 [2024-04-26 14:45:42.442933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.011 [2024-04-26 14:45:42.532736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.300 14:45:47 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:02.300 14:45:47 -- common/autotest_common.sh@638 -- # local es=0 00:05:02.300 14:45:47 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:02.300 14:45:47 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:02.300 14:45:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:02.300 14:45:47 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:02.300 14:45:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:02.300 14:45:47 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:02.300 14:45:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:02.300 14:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:02.300 14:45:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:02.300 14:45:47 -- common/autotest_common.sh@641 -- # es=1 00:05:02.300 14:45:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:02.300 14:45:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:02.300 14:45:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:02.300 14:45:47 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:02.300 14:45:47 -- rpc/skip_rpc.sh@23 -- # killprocess 3647792 00:05:02.300 14:45:47 -- common/autotest_common.sh@936 -- # '[' -z 3647792 ']' 00:05:02.300 14:45:47 -- common/autotest_common.sh@940 -- # kill -0 3647792 00:05:02.300 14:45:47 -- common/autotest_common.sh@941 -- # uname 00:05:02.300 14:45:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.300 14:45:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3647792 00:05:02.300 14:45:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:02.300 14:45:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:02.300 14:45:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3647792' 00:05:02.300 killing process with pid 3647792 00:05:02.300 14:45:47 -- common/autotest_common.sh@955 -- # kill 3647792 00:05:02.300 14:45:47 -- common/autotest_common.sh@960 -- # wait 3647792 00:05:02.300 00:05:02.300 real 0m5.425s 00:05:02.300 user 0m5.091s 00:05:02.300 sys 0m0.337s 00:05:02.300 14:45:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.300 14:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:02.300 ************************************ 00:05:02.300 END TEST skip_rpc 00:05:02.300 ************************************ 00:05:02.300 14:45:47 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:02.300 14:45:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.300 14:45:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.300 14:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:02.300 ************************************ 00:05:02.300 START TEST skip_rpc_with_json 00:05:02.300 ************************************ 00:05:02.300 14:45:47 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:02.300 14:45:47 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:02.300 14:45:47 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3648493 00:05:02.300 14:45:47 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:02.300 14:45:47 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.300 14:45:47 -- rpc/skip_rpc.sh@31 -- # waitforlisten 3648493 00:05:02.300 14:45:47 -- common/autotest_common.sh@817 -- # '[' -z 3648493 ']' 00:05:02.300 14:45:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.300 14:45:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:02.301 14:45:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.301 14:45:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:02.301 14:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:02.301 [2024-04-26 14:45:47.920341] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:05:02.301 [2024-04-26 14:45:47.920437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648493 ] 00:05:02.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.301 [2024-04-26 14:45:47.953201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:02.301 [2024-04-26 14:45:47.979413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.559 [2024-04-26 14:45:48.065799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.817 14:45:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:02.817 14:45:48 -- common/autotest_common.sh@850 -- # return 0 00:05:02.817 14:45:48 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:02.817 14:45:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:02.817 14:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:02.817 [2024-04-26 14:45:48.329339] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:02.817 request: 00:05:02.817 { 00:05:02.817 "trtype": "tcp", 00:05:02.817 "method": "nvmf_get_transports", 00:05:02.817 "req_id": 1 00:05:02.817 } 00:05:02.817 Got JSON-RPC error response 00:05:02.817 response: 00:05:02.817 { 00:05:02.817 "code": -19, 00:05:02.817 "message": "No such device" 00:05:02.817 } 00:05:02.817 14:45:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:02.817 14:45:48 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:02.817 14:45:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:02.817 14:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:02.817 [2024-04-26 14:45:48.337464] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.817 14:45:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:02.817 14:45:48 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:02.817 14:45:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:02.817 14:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:02.817 14:45:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:02.817 14:45:48 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:02.817 { 00:05:02.817 "subsystems": [ 00:05:02.817 { 00:05:02.817 "subsystem": "vfio_user_target", 00:05:02.817 "config": null 00:05:02.817 }, 00:05:02.817 { 
00:05:02.817 "subsystem": "keyring", 00:05:02.817 "config": [] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "iobuf", 00:05:02.817 "config": [ 00:05:02.817 { 00:05:02.817 "method": "iobuf_set_options", 00:05:02.817 "params": { 00:05:02.817 "small_pool_count": 8192, 00:05:02.817 "large_pool_count": 1024, 00:05:02.817 "small_bufsize": 8192, 00:05:02.817 "large_bufsize": 135168 00:05:02.817 } 00:05:02.817 } 00:05:02.817 ] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "sock", 00:05:02.817 "config": [ 00:05:02.817 { 00:05:02.817 "method": "sock_impl_set_options", 00:05:02.817 "params": { 00:05:02.817 "impl_name": "posix", 00:05:02.817 "recv_buf_size": 2097152, 00:05:02.817 "send_buf_size": 2097152, 00:05:02.817 "enable_recv_pipe": true, 00:05:02.817 "enable_quickack": false, 00:05:02.817 "enable_placement_id": 0, 00:05:02.817 "enable_zerocopy_send_server": true, 00:05:02.817 "enable_zerocopy_send_client": false, 00:05:02.817 "zerocopy_threshold": 0, 00:05:02.817 "tls_version": 0, 00:05:02.817 "enable_ktls": false 00:05:02.817 } 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "method": "sock_impl_set_options", 00:05:02.817 "params": { 00:05:02.817 "impl_name": "ssl", 00:05:02.817 "recv_buf_size": 4096, 00:05:02.817 "send_buf_size": 4096, 00:05:02.817 "enable_recv_pipe": true, 00:05:02.817 "enable_quickack": false, 00:05:02.817 "enable_placement_id": 0, 00:05:02.817 "enable_zerocopy_send_server": true, 00:05:02.817 "enable_zerocopy_send_client": false, 00:05:02.817 "zerocopy_threshold": 0, 00:05:02.817 "tls_version": 0, 00:05:02.817 "enable_ktls": false 00:05:02.817 } 00:05:02.817 } 00:05:02.817 ] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "vmd", 00:05:02.817 "config": [] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "accel", 00:05:02.817 "config": [ 00:05:02.817 { 00:05:02.817 "method": "accel_set_options", 00:05:02.817 "params": { 00:05:02.817 "small_cache_size": 128, 00:05:02.817 "large_cache_size": 16, 00:05:02.817 "task_count": 2048, 00:05:02.817 "sequence_count": 2048, 00:05:02.817 "buf_count": 2048 00:05:02.817 } 00:05:02.817 } 00:05:02.817 ] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "bdev", 00:05:02.817 "config": [ 00:05:02.817 { 00:05:02.817 "method": "bdev_set_options", 00:05:02.817 "params": { 00:05:02.817 "bdev_io_pool_size": 65535, 00:05:02.817 "bdev_io_cache_size": 256, 00:05:02.817 "bdev_auto_examine": true, 00:05:02.817 "iobuf_small_cache_size": 128, 00:05:02.817 "iobuf_large_cache_size": 16 00:05:02.817 } 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "method": "bdev_raid_set_options", 00:05:02.817 "params": { 00:05:02.817 "process_window_size_kb": 1024 00:05:02.817 } 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "method": "bdev_iscsi_set_options", 00:05:02.817 "params": { 00:05:02.817 "timeout_sec": 30 00:05:02.817 } 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "method": "bdev_nvme_set_options", 00:05:02.817 "params": { 00:05:02.817 "action_on_timeout": "none", 00:05:02.817 "timeout_us": 0, 00:05:02.817 "timeout_admin_us": 0, 00:05:02.817 "keep_alive_timeout_ms": 10000, 00:05:02.817 "arbitration_burst": 0, 00:05:02.817 "low_priority_weight": 0, 00:05:02.817 "medium_priority_weight": 0, 00:05:02.817 "high_priority_weight": 0, 00:05:02.817 "nvme_adminq_poll_period_us": 10000, 00:05:02.817 "nvme_ioq_poll_period_us": 0, 00:05:02.817 "io_queue_requests": 0, 00:05:02.817 "delay_cmd_submit": true, 00:05:02.817 "transport_retry_count": 4, 00:05:02.817 "bdev_retry_count": 3, 00:05:02.817 "transport_ack_timeout": 0, 00:05:02.817 
"ctrlr_loss_timeout_sec": 0, 00:05:02.817 "reconnect_delay_sec": 0, 00:05:02.817 "fast_io_fail_timeout_sec": 0, 00:05:02.817 "disable_auto_failback": false, 00:05:02.817 "generate_uuids": false, 00:05:02.817 "transport_tos": 0, 00:05:02.817 "nvme_error_stat": false, 00:05:02.817 "rdma_srq_size": 0, 00:05:02.817 "io_path_stat": false, 00:05:02.817 "allow_accel_sequence": false, 00:05:02.817 "rdma_max_cq_size": 0, 00:05:02.817 "rdma_cm_event_timeout_ms": 0, 00:05:02.817 "dhchap_digests": [ 00:05:02.817 "sha256", 00:05:02.817 "sha384", 00:05:02.817 "sha512" 00:05:02.817 ], 00:05:02.817 "dhchap_dhgroups": [ 00:05:02.817 "null", 00:05:02.817 "ffdhe2048", 00:05:02.817 "ffdhe3072", 00:05:02.817 "ffdhe4096", 00:05:02.817 "ffdhe6144", 00:05:02.817 "ffdhe8192" 00:05:02.817 ] 00:05:02.817 } 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "method": "bdev_nvme_set_hotplug", 00:05:02.817 "params": { 00:05:02.817 "period_us": 100000, 00:05:02.817 "enable": false 00:05:02.817 } 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "method": "bdev_wait_for_examine" 00:05:02.817 } 00:05:02.817 ] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "scsi", 00:05:02.817 "config": null 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "scheduler", 00:05:02.817 "config": [ 00:05:02.817 { 00:05:02.817 "method": "framework_set_scheduler", 00:05:02.817 "params": { 00:05:02.817 "name": "static" 00:05:02.817 } 00:05:02.817 } 00:05:02.817 ] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "vhost_scsi", 00:05:02.817 "config": [] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "vhost_blk", 00:05:02.817 "config": [] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "ublk", 00:05:02.817 "config": [] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "nbd", 00:05:02.817 "config": [] 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "subsystem": "nvmf", 00:05:02.817 "config": [ 00:05:02.817 { 00:05:02.817 "method": "nvmf_set_config", 00:05:02.817 "params": { 00:05:02.817 "discovery_filter": "match_any", 00:05:02.817 "admin_cmd_passthru": { 00:05:02.817 "identify_ctrlr": false 00:05:02.817 } 00:05:02.817 } 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "method": "nvmf_set_max_subsystems", 00:05:02.817 "params": { 00:05:02.817 "max_subsystems": 1024 00:05:02.817 } 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "method": "nvmf_set_crdt", 00:05:02.817 "params": { 00:05:02.817 "crdt1": 0, 00:05:02.817 "crdt2": 0, 00:05:02.817 "crdt3": 0 00:05:02.817 } 00:05:02.817 }, 00:05:02.817 { 00:05:02.817 "method": "nvmf_create_transport", 00:05:02.818 "params": { 00:05:02.818 "trtype": "TCP", 00:05:02.818 "max_queue_depth": 128, 00:05:02.818 "max_io_qpairs_per_ctrlr": 127, 00:05:02.818 "in_capsule_data_size": 4096, 00:05:02.818 "max_io_size": 131072, 00:05:02.818 "io_unit_size": 131072, 00:05:02.818 "max_aq_depth": 128, 00:05:02.818 "num_shared_buffers": 511, 00:05:02.818 "buf_cache_size": 4294967295, 00:05:02.818 "dif_insert_or_strip": false, 00:05:02.818 "zcopy": false, 00:05:02.818 "c2h_success": true, 00:05:02.818 "sock_priority": 0, 00:05:02.818 "abort_timeout_sec": 1, 00:05:02.818 "ack_timeout": 0, 00:05:02.818 "data_wr_pool_size": 0 00:05:02.818 } 00:05:02.818 } 00:05:02.818 ] 00:05:02.818 }, 00:05:02.818 { 00:05:02.818 "subsystem": "iscsi", 00:05:02.818 "config": [ 00:05:02.818 { 00:05:02.818 "method": "iscsi_set_options", 00:05:02.818 "params": { 00:05:02.818 "node_base": "iqn.2016-06.io.spdk", 00:05:02.818 "max_sessions": 128, 00:05:02.818 "max_connections_per_session": 2, 00:05:02.818 "max_queue_depth": 64, 
00:05:02.818 "default_time2wait": 2, 00:05:02.818 "default_time2retain": 20, 00:05:02.818 "first_burst_length": 8192, 00:05:02.818 "immediate_data": true, 00:05:02.818 "allow_duplicated_isid": false, 00:05:02.818 "error_recovery_level": 0, 00:05:02.818 "nop_timeout": 60, 00:05:02.818 "nop_in_interval": 30, 00:05:02.818 "disable_chap": false, 00:05:02.818 "require_chap": false, 00:05:02.818 "mutual_chap": false, 00:05:02.818 "chap_group": 0, 00:05:02.818 "max_large_datain_per_connection": 64, 00:05:02.818 "max_r2t_per_connection": 4, 00:05:02.818 "pdu_pool_size": 36864, 00:05:02.818 "immediate_data_pool_size": 16384, 00:05:02.818 "data_out_pool_size": 2048 00:05:02.818 } 00:05:02.818 } 00:05:02.818 ] 00:05:02.818 } 00:05:02.818 ] 00:05:02.818 } 00:05:02.818 14:45:48 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:02.818 14:45:48 -- rpc/skip_rpc.sh@40 -- # killprocess 3648493 00:05:02.818 14:45:48 -- common/autotest_common.sh@936 -- # '[' -z 3648493 ']' 00:05:02.818 14:45:48 -- common/autotest_common.sh@940 -- # kill -0 3648493 00:05:02.818 14:45:48 -- common/autotest_common.sh@941 -- # uname 00:05:02.818 14:45:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.818 14:45:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3648493 00:05:02.818 14:45:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:02.818 14:45:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:02.818 14:45:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3648493' 00:05:02.818 killing process with pid 3648493 00:05:02.818 14:45:48 -- common/autotest_common.sh@955 -- # kill 3648493 00:05:02.818 14:45:48 -- common/autotest_common.sh@960 -- # wait 3648493 00:05:03.383 14:45:48 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3648631 00:05:03.383 14:45:48 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.383 14:45:48 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.645 14:45:53 -- rpc/skip_rpc.sh@50 -- # killprocess 3648631 00:05:08.645 14:45:53 -- common/autotest_common.sh@936 -- # '[' -z 3648631 ']' 00:05:08.645 14:45:53 -- common/autotest_common.sh@940 -- # kill -0 3648631 00:05:08.645 14:45:53 -- common/autotest_common.sh@941 -- # uname 00:05:08.645 14:45:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:08.646 14:45:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3648631 00:05:08.646 14:45:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:08.646 14:45:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:08.646 14:45:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3648631' 00:05:08.646 killing process with pid 3648631 00:05:08.646 14:45:53 -- common/autotest_common.sh@955 -- # kill 3648631 00:05:08.646 14:45:53 -- common/autotest_common.sh@960 -- # wait 3648631 00:05:08.646 14:45:54 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.646 14:45:54 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.646 00:05:08.646 real 0m6.516s 00:05:08.646 user 0m6.090s 00:05:08.646 sys 0m0.708s 00:05:08.646 14:45:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.646 14:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:08.646 
************************************ 00:05:08.646 END TEST skip_rpc_with_json 00:05:08.646 ************************************ 00:05:08.904 14:45:54 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:08.904 14:45:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.904 14:45:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.904 14:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:08.904 ************************************ 00:05:08.904 START TEST skip_rpc_with_delay 00:05:08.904 ************************************ 00:05:08.904 14:45:54 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:08.904 14:45:54 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.904 14:45:54 -- common/autotest_common.sh@638 -- # local es=0 00:05:08.904 14:45:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.904 14:45:54 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.904 14:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:08.904 14:45:54 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.904 14:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:08.905 14:45:54 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.905 14:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:08.905 14:45:54 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.905 14:45:54 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:08.905 14:45:54 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.905 [2024-04-26 14:45:54.558596] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
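That *ERROR* is the point of skip_rpc_with_delay: --wait-for-rpc pauses initialization until an RPC tells the target to proceed, so it is rejected when combined with --no-rpc-server, which guarantees no such RPC can ever arrive. The valid pairing, as a sketch:

  build/bin/spdk_tgt --no-rpc-server --wait-for-rpc   # rejected at startup, as logged above
  build/bin/spdk_tgt --wait-for-rpc &                 # valid: init pauses at the RPC-ready point
  scripts/rpc.py framework_start_init                 # resumes the paused target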
00:05:08.905 [2024-04-26 14:45:54.558714] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:08.905 14:45:54 -- common/autotest_common.sh@641 -- # es=1 00:05:08.905 14:45:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:08.905 14:45:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:08.905 14:45:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:08.905 00:05:08.905 real 0m0.063s 00:05:08.905 user 0m0.042s 00:05:08.905 sys 0m0.021s 00:05:08.905 14:45:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.905 14:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:08.905 ************************************ 00:05:08.905 END TEST skip_rpc_with_delay 00:05:08.905 ************************************ 00:05:08.905 14:45:54 -- rpc/skip_rpc.sh@77 -- # uname 00:05:08.905 14:45:54 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:08.905 14:45:54 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:08.905 14:45:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.905 14:45:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.905 14:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:09.163 ************************************ 00:05:09.163 START TEST exit_on_failed_rpc_init 00:05:09.163 ************************************ 00:05:09.163 14:45:54 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:09.163 14:45:54 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3649363 00:05:09.163 14:45:54 -- rpc/skip_rpc.sh@63 -- # waitforlisten 3649363 00:05:09.163 14:45:54 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.163 14:45:54 -- common/autotest_common.sh@817 -- # '[' -z 3649363 ']' 00:05:09.163 14:45:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.163 14:45:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:09.163 14:45:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.163 14:45:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:09.163 14:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:09.163 [2024-04-26 14:45:54.741897] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:05:09.163 [2024-04-26 14:45:54.741992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649363 ] 00:05:09.163 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.164 [2024-04-26 14:45:54.775344] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:09.164 [2024-04-26 14:45:54.801803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.164 [2024-04-26 14:45:54.887836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.423 14:45:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.423 14:45:55 -- common/autotest_common.sh@850 -- # return 0 00:05:09.423 14:45:55 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.423 14:45:55 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.423 14:45:55 -- common/autotest_common.sh@638 -- # local es=0 00:05:09.423 14:45:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.423 14:45:55 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.423 14:45:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:09.423 14:45:55 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.423 14:45:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:09.423 14:45:55 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.423 14:45:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:09.423 14:45:55 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.423 14:45:55 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:09.423 14:45:55 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.680 [2024-04-26 14:45:55.199642] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:05:09.681 [2024-04-26 14:45:55.199726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649376 ] 00:05:09.681 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.681 [2024-04-26 14:45:55.232295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:09.681 [2024-04-26 14:45:55.262330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.681 [2024-04-26 14:45:55.355075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.681 [2024-04-26 14:45:55.355226] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
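The "socket ... in use" error above is the failure exit_on_failed_rpc_init deliberately provokes: the second spdk_tgt (-m 0x2) tries to bind the same default /var/tmp/spdk.sock the first (-m 0x1) already owns, and must exit non-zero. Two targets can coexist only with distinct RPC sockets; a sketch (the second socket path is hypothetical):

  build/bin/spdk_tgt -m 0x1 &                             # owns the default /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &      # distinct socket, so no conflict
  scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version  # address the second instance explicitly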
00:05:09.681 [2024-04-26 14:45:55.355244] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:09.681 [2024-04-26 14:45:55.355257] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:09.938 14:45:55 -- common/autotest_common.sh@641 -- # es=234 00:05:09.938 14:45:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:09.938 14:45:55 -- common/autotest_common.sh@650 -- # es=106 00:05:09.938 14:45:55 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:09.938 14:45:55 -- common/autotest_common.sh@658 -- # es=1 00:05:09.938 14:45:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:09.938 14:45:55 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:09.938 14:45:55 -- rpc/skip_rpc.sh@70 -- # killprocess 3649363 00:05:09.938 14:45:55 -- common/autotest_common.sh@936 -- # '[' -z 3649363 ']' 00:05:09.938 14:45:55 -- common/autotest_common.sh@940 -- # kill -0 3649363 00:05:09.938 14:45:55 -- common/autotest_common.sh@941 -- # uname 00:05:09.938 14:45:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.938 14:45:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3649363 00:05:09.938 14:45:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.938 14:45:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.938 14:45:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3649363' 00:05:09.938 killing process with pid 3649363 00:05:09.938 14:45:55 -- common/autotest_common.sh@955 -- # kill 3649363 00:05:09.938 14:45:55 -- common/autotest_common.sh@960 -- # wait 3649363 00:05:10.196 00:05:10.196 real 0m1.166s 00:05:10.196 user 0m1.247s 00:05:10.196 sys 0m0.467s 00:05:10.196 14:45:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.196 14:45:55 -- common/autotest_common.sh@10 -- # set +x 00:05:10.196 ************************************ 00:05:10.196 END TEST exit_on_failed_rpc_init 00:05:10.196 ************************************ 00:05:10.196 14:45:55 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.196 00:05:10.196 real 0m13.702s 00:05:10.196 user 0m12.657s 00:05:10.196 sys 0m1.844s 00:05:10.196 14:45:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.196 14:45:55 -- common/autotest_common.sh@10 -- # set +x 00:05:10.196 ************************************ 00:05:10.196 END TEST skip_rpc 00:05:10.196 ************************************ 00:05:10.196 14:45:55 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:10.196 14:45:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.196 14:45:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.196 14:45:55 -- common/autotest_common.sh@10 -- # set +x 00:05:10.454 ************************************ 00:05:10.454 START TEST rpc_client 00:05:10.454 ************************************ 00:05:10.454 14:45:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:10.454 * Looking for test storage... 
00:05:10.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:10.454 14:45:56 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:10.454 OK 00:05:10.454 14:45:56 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:10.454 00:05:10.454 real 0m0.069s 00:05:10.454 user 0m0.028s 00:05:10.454 sys 0m0.046s 00:05:10.454 14:45:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.454 14:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:10.454 ************************************ 00:05:10.454 END TEST rpc_client 00:05:10.454 ************************************ 00:05:10.454 14:45:56 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:10.455 14:45:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.455 14:45:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.455 14:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:10.455 ************************************ 00:05:10.455 START TEST json_config 00:05:10.455 ************************************ 00:05:10.455 14:45:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:10.713 14:45:56 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.713 14:45:56 -- nvmf/common.sh@7 -- # uname -s 00:05:10.713 14:45:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.713 14:45:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.713 14:45:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.713 14:45:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.713 14:45:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.713 14:45:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.713 14:45:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.713 14:45:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.713 14:45:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.713 14:45:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.713 14:45:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:10.713 14:45:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:10.713 14:45:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.713 14:45:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.713 14:45:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.713 14:45:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.713 14:45:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.713 14:45:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.713 14:45:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.713 14:45:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.713 14:45:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.713 14:45:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.713 14:45:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.713 14:45:56 -- paths/export.sh@5 -- # export PATH 00:05:10.713 14:45:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.713 14:45:56 -- nvmf/common.sh@47 -- # : 0 00:05:10.713 14:45:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:10.713 14:45:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:10.713 14:45:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.713 14:45:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.713 14:45:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.713 14:45:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:10.713 14:45:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:10.713 14:45:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:10.713 14:45:56 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:10.713 14:45:56 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:10.713 14:45:56 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:10.713 14:45:56 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:10.713 14:45:56 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:10.713 14:45:56 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:10.713 14:45:56 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:10.713 14:45:56 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:10.713 14:45:56 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:10.713 14:45:56 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:10.713 14:45:56 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:10.713 14:45:56 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:10.713 14:45:56 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:10.713 14:45:56 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:10.713 14:45:56 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.713 14:45:56 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:10.713 INFO: JSON configuration test init 00:05:10.713 14:45:56 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:10.713 14:45:56 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:10.713 14:45:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:10.714 14:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:10.714 14:45:56 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:10.714 14:45:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:10.714 14:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:10.714 14:45:56 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:10.714 14:45:56 -- json_config/common.sh@9 -- # local app=target 00:05:10.714 14:45:56 -- json_config/common.sh@10 -- # shift 00:05:10.714 14:45:56 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.714 14:45:56 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.714 14:45:56 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.714 14:45:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.714 14:45:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.714 14:45:56 -- json_config/common.sh@22 -- # app_pid["$app"]=3649632 00:05:10.714 14:45:56 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:10.714 14:45:56 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.714 Waiting for target to run... 00:05:10.714 14:45:56 -- json_config/common.sh@25 -- # waitforlisten 3649632 /var/tmp/spdk_tgt.sock 00:05:10.714 14:45:56 -- common/autotest_common.sh@817 -- # '[' -z 3649632 ']' 00:05:10.714 14:45:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.714 14:45:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.714 14:45:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.714 14:45:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.714 14:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:10.714 [2024-04-26 14:45:56.302143] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
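For json_config the target is started on a dedicated socket (-r /var/tmp/spdk_tgt.sock) and in the paused --wait-for-rpc state, then polled until the RPC server answers before any configuration is replayed. A minimal sketch of that startup pattern:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # poll until the socket accepts RPCs before issuing configuration calls
  until scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done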
00:05:10.714 [2024-04-26 14:45:56.302224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649632 ] 00:05:10.714 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.279 [2024-04-26 14:45:56.777201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:11.280 [2024-04-26 14:45:56.810162] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.280 [2024-04-26 14:45:56.889952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.537 14:45:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:11.537 14:45:57 -- common/autotest_common.sh@850 -- # return 0 00:05:11.537 14:45:57 -- json_config/common.sh@26 -- # echo '' 00:05:11.537 00:05:11.537 14:45:57 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:11.537 14:45:57 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:11.537 14:45:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:11.537 14:45:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.537 14:45:57 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:11.537 14:45:57 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:11.537 14:45:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:11.537 14:45:57 -- common/autotest_common.sh@10 -- # set +x 00:05:11.795 14:45:57 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:11.795 14:45:57 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:11.795 14:45:57 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:15.075 14:46:00 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:15.075 14:46:00 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:15.075 14:46:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:15.075 14:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:15.075 14:46:00 -- json_config/json_config.sh@45 -- # local ret=0 00:05:15.075 14:46:00 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:15.075 14:46:00 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:15.075 14:46:00 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:15.075 14:46:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:15.075 14:46:00 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:15.075 14:46:00 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:15.075 14:46:00 -- json_config/json_config.sh@48 -- # local get_types 00:05:15.075 14:46:00 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:15.075 14:46:00 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:15.075 14:46:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:15.075 14:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:15.075 14:46:00 -- json_config/json_config.sh@55 -- # return 0 
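The notification-type check that just returned 0 asks the target which event types it emits and compares them against the expected pair. A minimal standalone sketch of the same probe in bash, assuming the running target's RPC socket at /var/tmp/spdk_tgt.sock as above and that the two types come back in this order:

    # Ask the target which notification types it supports; expect bdev_register/bdev_unregister.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    get_types=$($rpc -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]')
    [[ $get_types == $'bdev_register\nbdev_unregister' ]] || { echo "unexpected types: $get_types"; exit 1; }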
00:05:15.075 14:46:00 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:15.075 14:46:00 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:15.075 14:46:00 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:15.075 14:46:00 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:15.075 14:46:00 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:15.075 14:46:00 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:15.075 14:46:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:15.075 14:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:15.075 14:46:00 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:15.075 14:46:00 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:15.075 14:46:00 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:15.075 14:46:00 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:15.075 14:46:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:15.332 MallocForNvmf0 00:05:15.332 14:46:00 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:15.332 14:46:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:15.590 MallocForNvmf1 00:05:15.590 14:46:01 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:15.590 14:46:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:15.848 [2024-04-26 14:46:01.409567] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.848 14:46:01 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.848 14:46:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:16.105 14:46:01 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:16.105 14:46:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:16.363 14:46:01 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:16.363 14:46:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:16.621 14:46:02 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.621 14:46:02 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.892 [2024-04-26 14:46:02.372806] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.892 
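For reference, the create_nvmf_subsystem_config step that just completed boils down to the following RPC sequence, taken from the calls logged above (for bdev_malloc_create the arguments are total size in MiB and block size in bytes; the comments on the transport flags reflect my reading of rpc.py's options):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB backing bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB backing bdev, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport; -u io-unit size, -c in-capsule data size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a allows any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420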
14:46:02 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:16.892 14:46:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:16.892 14:46:02 -- common/autotest_common.sh@10 -- # set +x 00:05:16.892 14:46:02 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:16.892 14:46:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:16.892 14:46:02 -- common/autotest_common.sh@10 -- # set +x 00:05:16.892 14:46:02 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:16.892 14:46:02 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.892 14:46:02 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:17.151 MallocBdevForConfigChangeCheck 00:05:17.151 14:46:02 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:17.151 14:46:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:17.151 14:46:02 -- common/autotest_common.sh@10 -- # set +x 00:05:17.151 14:46:02 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:17.151 14:46:02 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.408 14:46:03 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:17.408 INFO: shutting down applications... 00:05:17.408 14:46:03 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:17.409 14:46:03 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:17.409 14:46:03 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:17.409 14:46:03 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:19.306 Calling clear_iscsi_subsystem 00:05:19.306 Calling clear_nvmf_subsystem 00:05:19.306 Calling clear_nbd_subsystem 00:05:19.306 Calling clear_ublk_subsystem 00:05:19.306 Calling clear_vhost_blk_subsystem 00:05:19.306 Calling clear_vhost_scsi_subsystem 00:05:19.306 Calling clear_bdev_subsystem 00:05:19.306 14:46:04 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:19.306 14:46:04 -- json_config/json_config.sh@343 -- # count=100 00:05:19.306 14:46:04 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:19.306 14:46:04 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.306 14:46:04 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:19.306 14:46:04 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:19.563 14:46:05 -- json_config/json_config.sh@345 -- # break 00:05:19.563 14:46:05 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:19.563 14:46:05 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:19.563 14:46:05 -- json_config/common.sh@31 -- # local app=target 00:05:19.563 14:46:05 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.563 14:46:05 -- json_config/common.sh@35 -- # [[ -n 3649632 
]] 00:05:19.563 14:46:05 -- json_config/common.sh@38 -- # kill -SIGINT 3649632 00:05:19.563 14:46:05 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.563 14:46:05 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.563 14:46:05 -- json_config/common.sh@41 -- # kill -0 3649632 00:05:19.563 14:46:05 -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.130 14:46:05 -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.130 14:46:05 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.130 14:46:05 -- json_config/common.sh@41 -- # kill -0 3649632 00:05:20.130 14:46:05 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.130 14:46:05 -- json_config/common.sh@43 -- # break 00:05:20.130 14:46:05 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.130 14:46:05 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.130 SPDK target shutdown done 00:05:20.130 14:46:05 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:20.130 INFO: relaunching applications... 00:05:20.130 14:46:05 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.130 14:46:05 -- json_config/common.sh@9 -- # local app=target 00:05:20.130 14:46:05 -- json_config/common.sh@10 -- # shift 00:05:20.130 14:46:05 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.130 14:46:05 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.130 14:46:05 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.130 14:46:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.130 14:46:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.130 14:46:05 -- json_config/common.sh@22 -- # app_pid["$app"]=3650938 00:05:20.130 14:46:05 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:20.130 14:46:05 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.130 Waiting for target to run... 00:05:20.130 14:46:05 -- json_config/common.sh@25 -- # waitforlisten 3650938 /var/tmp/spdk_tgt.sock 00:05:20.130 14:46:05 -- common/autotest_common.sh@817 -- # '[' -z 3650938 ']' 00:05:20.130 14:46:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.130 14:46:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:20.130 14:46:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.130 14:46:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:20.130 14:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:20.130 [2024-04-26 14:46:05.666185] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:05:20.130 [2024-04-26 14:46:05.666263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650938 ] 00:05:20.130 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.696 [2024-04-26 14:46:06.148187] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:05:20.696 [2024-04-26 14:46:06.182354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.696 [2024-04-26 14:46:06.261889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.016 [2024-04-26 14:46:09.287009] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.016 [2024-04-26 14:46:09.319518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.583 14:46:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.583 14:46:10 -- common/autotest_common.sh@850 -- # return 0 00:05:24.583 14:46:10 -- json_config/common.sh@26 -- # echo '' 00:05:24.583 00:05:24.583 14:46:10 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:24.583 14:46:10 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:24.583 INFO: Checking if target configuration is the same... 00:05:24.583 14:46:10 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.583 14:46:10 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:24.583 14:46:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.583 + '[' 2 -ne 2 ']' 00:05:24.583 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:24.583 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:24.583 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:24.583 +++ basename /dev/fd/62 00:05:24.583 ++ mktemp /tmp/62.XXX 00:05:24.583 + tmp_file_1=/tmp/62.Gc1 00:05:24.583 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.583 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.583 + tmp_file_2=/tmp/spdk_tgt_config.json.5UV 00:05:24.583 + ret=0 00:05:24.583 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.841 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.841 + diff -u /tmp/62.Gc1 /tmp/spdk_tgt_config.json.5UV 00:05:24.841 + echo 'INFO: JSON config files are the same' 00:05:24.841 INFO: JSON config files are the same 00:05:24.841 + rm /tmp/62.Gc1 /tmp/spdk_tgt_config.json.5UV 00:05:24.841 + exit 0 00:05:24.841 14:46:10 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:24.841 14:46:10 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:24.841 INFO: changing configuration and checking if this can be detected... 
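The same-configuration check above is json_diff.sh at work: dump the live configuration over RPC, canonicalize both JSON documents, and diff them. A condensed sketch of that flow, assuming config_filter.py reads the config on stdin the way the harness invokes it:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    $spdk/test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
    $spdk/test/json_config/config_filter.py -method sort < $spdk/spdk_tgt_config.json > /tmp/ref.sorted
    # diff exits 0 when the sorted configs match, 1 when they differ
    diff -u /tmp/ref.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'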
00:05:24.841 14:46:10 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:24.841 14:46:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:25.099 14:46:10 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.099 14:46:10 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:25.099 14:46:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.099 + '[' 2 -ne 2 ']' 00:05:25.099 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:25.099 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:25.099 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.099 +++ basename /dev/fd/62 00:05:25.099 ++ mktemp /tmp/62.XXX 00:05:25.100 + tmp_file_1=/tmp/62.zFK 00:05:25.100 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.100 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:25.100 + tmp_file_2=/tmp/spdk_tgt_config.json.qjO 00:05:25.100 + ret=0 00:05:25.100 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.358 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.616 + diff -u /tmp/62.zFK /tmp/spdk_tgt_config.json.qjO 00:05:25.616 + ret=1 00:05:25.616 + echo '=== Start of file: /tmp/62.zFK ===' 00:05:25.616 + cat /tmp/62.zFK 00:05:25.616 + echo '=== End of file: /tmp/62.zFK ===' 00:05:25.616 + echo '' 00:05:25.616 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qjO ===' 00:05:25.616 + cat /tmp/spdk_tgt_config.json.qjO 00:05:25.616 + echo '=== End of file: /tmp/spdk_tgt_config.json.qjO ===' 00:05:25.616 + echo '' 00:05:25.616 + rm /tmp/62.zFK /tmp/spdk_tgt_config.json.qjO 00:05:25.616 + exit 1 00:05:25.616 14:46:11 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:25.616 INFO: configuration change detected. 
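Here the sorted diff exited 1, so the deliberate change (deleting MallocBdevForConfigChangeCheck, the bdev created earlier solely to be removed) was detected and the test can finish. The teardown that follows reuses the shutdown pattern seen earlier in this run, sketched below: signal the target, then poll it with kill -0 until it exits:

    kill -SIGINT "$pid"                       # ask spdk_tgt to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 sends no signal, it only probes liveness
        sleep 0.5
    done
    echo 'SPDK target shutdown done'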
00:05:25.616 14:46:11 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:25.616 14:46:11 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:25.616 14:46:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:25.616 14:46:11 -- common/autotest_common.sh@10 -- # set +x 00:05:25.616 14:46:11 -- json_config/json_config.sh@307 -- # local ret=0 00:05:25.616 14:46:11 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:25.616 14:46:11 -- json_config/json_config.sh@317 -- # [[ -n 3650938 ]] 00:05:25.616 14:46:11 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:25.616 14:46:11 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:25.616 14:46:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:25.616 14:46:11 -- common/autotest_common.sh@10 -- # set +x 00:05:25.616 14:46:11 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:25.616 14:46:11 -- json_config/json_config.sh@193 -- # uname -s 00:05:25.616 14:46:11 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:25.617 14:46:11 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:25.617 14:46:11 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:25.617 14:46:11 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:25.617 14:46:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:25.617 14:46:11 -- common/autotest_common.sh@10 -- # set +x 00:05:25.617 14:46:11 -- json_config/json_config.sh@323 -- # killprocess 3650938 00:05:25.617 14:46:11 -- common/autotest_common.sh@936 -- # '[' -z 3650938 ']' 00:05:25.617 14:46:11 -- common/autotest_common.sh@940 -- # kill -0 3650938 00:05:25.617 14:46:11 -- common/autotest_common.sh@941 -- # uname 00:05:25.617 14:46:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.617 14:46:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3650938 00:05:25.617 14:46:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.617 14:46:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.617 14:46:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3650938' 00:05:25.617 killing process with pid 3650938 00:05:25.617 14:46:11 -- common/autotest_common.sh@955 -- # kill 3650938 00:05:25.617 14:46:11 -- common/autotest_common.sh@960 -- # wait 3650938 00:05:27.515 14:46:12 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.515 14:46:12 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:27.515 14:46:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:27.515 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:27.515 14:46:12 -- json_config/json_config.sh@328 -- # return 0 00:05:27.515 14:46:12 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:27.515 INFO: Success 00:05:27.515 00:05:27.515 real 0m16.665s 00:05:27.515 user 0m18.354s 00:05:27.515 sys 0m2.214s 00:05:27.515 14:46:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.515 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:27.515 ************************************ 00:05:27.515 END TEST json_config 00:05:27.515 ************************************ 00:05:27.515 14:46:12 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:27.515 14:46:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.515 14:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.515 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:27.515 ************************************ 00:05:27.515 START TEST json_config_extra_key 00:05:27.515 ************************************ 00:05:27.515 14:46:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:27.515 14:46:13 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.515 14:46:13 -- nvmf/common.sh@7 -- # uname -s 00:05:27.515 14:46:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.515 14:46:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.515 14:46:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.515 14:46:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.515 14:46:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.515 14:46:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.515 14:46:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.515 14:46:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.515 14:46:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.516 14:46:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.516 14:46:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:27.516 14:46:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:27.516 14:46:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.516 14:46:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.516 14:46:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.516 14:46:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.516 14:46:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.516 14:46:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.516 14:46:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.516 14:46:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.516 14:46:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.516 14:46:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.516 14:46:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.516 14:46:13 -- paths/export.sh@5 -- # export PATH 00:05:27.516 14:46:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.516 14:46:13 -- nvmf/common.sh@47 -- # : 0 00:05:27.516 14:46:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.516 14:46:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.516 14:46:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.516 14:46:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.516 14:46:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.516 14:46:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.516 14:46:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.516 14:46:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:27.516 INFO: launching applications... 
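Unlike json_config, which started the target with --wait-for-rpc and built its configuration over RPC, json_config_extra_key boots the target directly from a JSON file. The launch that follows is essentially:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json &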
00:05:27.516 14:46:13 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:27.516 14:46:13 -- json_config/common.sh@9 -- # local app=target 00:05:27.516 14:46:13 -- json_config/common.sh@10 -- # shift 00:05:27.516 14:46:13 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.516 14:46:13 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.516 14:46:13 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.516 14:46:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.516 14:46:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.516 14:46:13 -- json_config/common.sh@22 -- # app_pid["$app"]=3651879 00:05:27.516 14:46:13 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:27.516 14:46:13 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.516 Waiting for target to run... 00:05:27.516 14:46:13 -- json_config/common.sh@25 -- # waitforlisten 3651879 /var/tmp/spdk_tgt.sock 00:05:27.516 14:46:13 -- common/autotest_common.sh@817 -- # '[' -z 3651879 ']' 00:05:27.516 14:46:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.516 14:46:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.516 14:46:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.516 14:46:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.516 14:46:13 -- common/autotest_common.sh@10 -- # set +x 00:05:27.516 [2024-04-26 14:46:13.073922] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:05:27.516 [2024-04-26 14:46:13.074000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651879 ] 00:05:27.516 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.081 [2024-04-26 14:46:13.535341] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:28.081 [2024-04-26 14:46:13.569471] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.081 [2024-04-26 14:46:13.649637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.339 14:46:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.339 14:46:13 -- common/autotest_common.sh@850 -- # return 0 00:05:28.339 14:46:13 -- json_config/common.sh@26 -- # echo '' 00:05:28.339 00:05:28.339 14:46:13 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:28.339 INFO: shutting down applications... 
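waitforlisten blocks until the freshly launched target answers on its RPC socket. A simplified sketch of the idea; the real helper in autotest_common.sh is more elaborate, but the core is polling an RPC that always exists:

    # Poll the UNIX-domain RPC socket until the target responds, up to ~100 tries.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        $rpc -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done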
00:05:28.339 14:46:13 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:28.339 14:46:13 -- json_config/common.sh@31 -- # local app=target 00:05:28.339 14:46:13 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:28.339 14:46:13 -- json_config/common.sh@35 -- # [[ -n 3651879 ]] 00:05:28.339 14:46:13 -- json_config/common.sh@38 -- # kill -SIGINT 3651879 00:05:28.339 14:46:13 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:28.339 14:46:13 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.339 14:46:13 -- json_config/common.sh@41 -- # kill -0 3651879 00:05:28.339 14:46:13 -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.905 14:46:14 -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.905 14:46:14 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.905 14:46:14 -- json_config/common.sh@41 -- # kill -0 3651879 00:05:28.905 14:46:14 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.905 14:46:14 -- json_config/common.sh@43 -- # break 00:05:28.905 14:46:14 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.905 14:46:14 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.905 SPDK target shutdown done 00:05:28.905 14:46:14 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:28.905 Success 00:05:28.905 00:05:28.905 real 0m1.526s 00:05:28.905 user 0m1.321s 00:05:28.905 sys 0m0.587s 00:05:28.905 14:46:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.905 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:28.905 ************************************ 00:05:28.905 END TEST json_config_extra_key 00:05:28.905 ************************************ 00:05:28.905 14:46:14 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.905 14:46:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.905 14:46:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.905 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:28.905 ************************************ 00:05:28.905 START TEST alias_rpc 00:05:28.905 ************************************ 00:05:28.905 14:46:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.164 * Looking for test storage... 00:05:29.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:29.164 14:46:14 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.164 14:46:14 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3652192 00:05:29.164 14:46:14 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.164 14:46:14 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3652192 00:05:29.164 14:46:14 -- common/autotest_common.sh@817 -- # '[' -z 3652192 ']' 00:05:29.164 14:46:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.164 14:46:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.164 14:46:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.164 14:46:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.164 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:29.164 [2024-04-26 14:46:14.724638] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:05:29.164 [2024-04-26 14:46:14.724734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652192 ] 00:05:29.164 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.164 [2024-04-26 14:46:14.756584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:29.164 [2024-04-26 14:46:14.782506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.164 [2024-04-26 14:46:14.864725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.421 14:46:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.421 14:46:15 -- common/autotest_common.sh@850 -- # return 0 00:05:29.421 14:46:15 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:29.679 14:46:15 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3652192 00:05:29.679 14:46:15 -- common/autotest_common.sh@936 -- # '[' -z 3652192 ']' 00:05:29.679 14:46:15 -- common/autotest_common.sh@940 -- # kill -0 3652192 00:05:29.679 14:46:15 -- common/autotest_common.sh@941 -- # uname 00:05:29.679 14:46:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.679 14:46:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3652192 00:05:29.679 14:46:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.679 14:46:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.679 14:46:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3652192' 00:05:29.679 killing process with pid 3652192 00:05:29.679 14:46:15 -- common/autotest_common.sh@955 -- # kill 3652192 00:05:29.679 14:46:15 -- common/autotest_common.sh@960 -- # wait 3652192 00:05:30.244 00:05:30.244 real 0m1.176s 00:05:30.244 user 0m1.233s 00:05:30.244 sys 0m0.422s 00:05:30.244 14:46:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:30.244 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.244 ************************************ 00:05:30.244 END TEST alias_rpc 00:05:30.244 ************************************ 00:05:30.244 14:46:15 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:30.244 14:46:15 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.244 14:46:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.244 14:46:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.244 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.244 ************************************ 00:05:30.244 START TEST spdkcli_tcp 00:05:30.244 ************************************ 00:05:30.244 14:46:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.244 * Looking for test storage... 
00:05:30.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:30.244 14:46:15 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:30.244 14:46:15 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:30.244 14:46:15 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:30.244 14:46:15 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:30.244 14:46:15 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:30.244 14:46:15 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:30.244 14:46:15 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:30.244 14:46:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:30.244 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.244 14:46:15 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3652384 00:05:30.244 14:46:15 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:30.244 14:46:15 -- spdkcli/tcp.sh@27 -- # waitforlisten 3652384 00:05:30.244 14:46:15 -- common/autotest_common.sh@817 -- # '[' -z 3652384 ']' 00:05:30.244 14:46:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.244 14:46:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.244 14:46:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.244 14:46:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.244 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.502 [2024-04-26 14:46:16.025209] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:05:30.502 [2024-04-26 14:46:16.025284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652384 ] 00:05:30.502 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.502 [2024-04-26 14:46:16.055911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
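spdkcli_tcp exercises rpc.py over TCP rather than the UNIX socket: once the target is up, socat bridges TCP port 9998 to /var/tmp/spdk.sock and rpc.py connects with an IP and port (both commands appear verbatim below; -r and -t set rpc.py's retry count and timeout):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods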
00:05:30.502 [2024-04-26 14:46:16.082811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.502 [2024-04-26 14:46:16.165826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.502 [2024-04-26 14:46:16.165830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.760 14:46:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.760 14:46:16 -- common/autotest_common.sh@850 -- # return 0 00:05:30.760 14:46:16 -- spdkcli/tcp.sh@31 -- # socat_pid=3652396 00:05:30.760 14:46:16 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.760 14:46:16 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:31.018 [ 00:05:31.018 "bdev_malloc_delete", 00:05:31.018 "bdev_malloc_create", 00:05:31.018 "bdev_null_resize", 00:05:31.018 "bdev_null_delete", 00:05:31.018 "bdev_null_create", 00:05:31.018 "bdev_nvme_cuse_unregister", 00:05:31.018 "bdev_nvme_cuse_register", 00:05:31.018 "bdev_opal_new_user", 00:05:31.018 "bdev_opal_set_lock_state", 00:05:31.018 "bdev_opal_delete", 00:05:31.018 "bdev_opal_get_info", 00:05:31.018 "bdev_opal_create", 00:05:31.018 "bdev_nvme_opal_revert", 00:05:31.018 "bdev_nvme_opal_init", 00:05:31.018 "bdev_nvme_send_cmd", 00:05:31.018 "bdev_nvme_get_path_iostat", 00:05:31.018 "bdev_nvme_get_mdns_discovery_info", 00:05:31.018 "bdev_nvme_stop_mdns_discovery", 00:05:31.018 "bdev_nvme_start_mdns_discovery", 00:05:31.018 "bdev_nvme_set_multipath_policy", 00:05:31.018 "bdev_nvme_set_preferred_path", 00:05:31.018 "bdev_nvme_get_io_paths", 00:05:31.018 "bdev_nvme_remove_error_injection", 00:05:31.018 "bdev_nvme_add_error_injection", 00:05:31.018 "bdev_nvme_get_discovery_info", 00:05:31.018 "bdev_nvme_stop_discovery", 00:05:31.018 "bdev_nvme_start_discovery", 00:05:31.018 "bdev_nvme_get_controller_health_info", 00:05:31.018 "bdev_nvme_disable_controller", 00:05:31.018 "bdev_nvme_enable_controller", 00:05:31.018 "bdev_nvme_reset_controller", 00:05:31.018 "bdev_nvme_get_transport_statistics", 00:05:31.018 "bdev_nvme_apply_firmware", 00:05:31.018 "bdev_nvme_detach_controller", 00:05:31.018 "bdev_nvme_get_controllers", 00:05:31.018 "bdev_nvme_attach_controller", 00:05:31.018 "bdev_nvme_set_hotplug", 00:05:31.018 "bdev_nvme_set_options", 00:05:31.018 "bdev_passthru_delete", 00:05:31.019 "bdev_passthru_create", 00:05:31.019 "bdev_lvol_grow_lvstore", 00:05:31.019 "bdev_lvol_get_lvols", 00:05:31.019 "bdev_lvol_get_lvstores", 00:05:31.019 "bdev_lvol_delete", 00:05:31.019 "bdev_lvol_set_read_only", 00:05:31.019 "bdev_lvol_resize", 00:05:31.019 "bdev_lvol_decouple_parent", 00:05:31.019 "bdev_lvol_inflate", 00:05:31.019 "bdev_lvol_rename", 00:05:31.019 "bdev_lvol_clone_bdev", 00:05:31.019 "bdev_lvol_clone", 00:05:31.019 "bdev_lvol_snapshot", 00:05:31.019 "bdev_lvol_create", 00:05:31.019 "bdev_lvol_delete_lvstore", 00:05:31.019 "bdev_lvol_rename_lvstore", 00:05:31.019 "bdev_lvol_create_lvstore", 00:05:31.019 "bdev_raid_set_options", 00:05:31.019 "bdev_raid_remove_base_bdev", 00:05:31.019 "bdev_raid_add_base_bdev", 00:05:31.019 "bdev_raid_delete", 00:05:31.019 "bdev_raid_create", 00:05:31.019 "bdev_raid_get_bdevs", 00:05:31.019 "bdev_error_inject_error", 00:05:31.019 "bdev_error_delete", 00:05:31.019 "bdev_error_create", 00:05:31.019 "bdev_split_delete", 00:05:31.019 "bdev_split_create", 00:05:31.019 "bdev_delay_delete", 00:05:31.019 "bdev_delay_create", 00:05:31.019 "bdev_delay_update_latency", 00:05:31.019 
"bdev_zone_block_delete", 00:05:31.019 "bdev_zone_block_create", 00:05:31.019 "blobfs_create", 00:05:31.019 "blobfs_detect", 00:05:31.019 "blobfs_set_cache_size", 00:05:31.019 "bdev_aio_delete", 00:05:31.019 "bdev_aio_rescan", 00:05:31.019 "bdev_aio_create", 00:05:31.019 "bdev_ftl_set_property", 00:05:31.019 "bdev_ftl_get_properties", 00:05:31.019 "bdev_ftl_get_stats", 00:05:31.019 "bdev_ftl_unmap", 00:05:31.019 "bdev_ftl_unload", 00:05:31.019 "bdev_ftl_delete", 00:05:31.019 "bdev_ftl_load", 00:05:31.019 "bdev_ftl_create", 00:05:31.019 "bdev_virtio_attach_controller", 00:05:31.019 "bdev_virtio_scsi_get_devices", 00:05:31.019 "bdev_virtio_detach_controller", 00:05:31.019 "bdev_virtio_blk_set_hotplug", 00:05:31.019 "bdev_iscsi_delete", 00:05:31.019 "bdev_iscsi_create", 00:05:31.019 "bdev_iscsi_set_options", 00:05:31.019 "accel_error_inject_error", 00:05:31.019 "ioat_scan_accel_module", 00:05:31.019 "dsa_scan_accel_module", 00:05:31.019 "iaa_scan_accel_module", 00:05:31.019 "vfu_virtio_create_scsi_endpoint", 00:05:31.019 "vfu_virtio_scsi_remove_target", 00:05:31.019 "vfu_virtio_scsi_add_target", 00:05:31.019 "vfu_virtio_create_blk_endpoint", 00:05:31.019 "vfu_virtio_delete_endpoint", 00:05:31.019 "keyring_file_remove_key", 00:05:31.019 "keyring_file_add_key", 00:05:31.019 "iscsi_get_histogram", 00:05:31.019 "iscsi_enable_histogram", 00:05:31.019 "iscsi_set_options", 00:05:31.019 "iscsi_get_auth_groups", 00:05:31.019 "iscsi_auth_group_remove_secret", 00:05:31.019 "iscsi_auth_group_add_secret", 00:05:31.019 "iscsi_delete_auth_group", 00:05:31.019 "iscsi_create_auth_group", 00:05:31.019 "iscsi_set_discovery_auth", 00:05:31.019 "iscsi_get_options", 00:05:31.019 "iscsi_target_node_request_logout", 00:05:31.019 "iscsi_target_node_set_redirect", 00:05:31.019 "iscsi_target_node_set_auth", 00:05:31.019 "iscsi_target_node_add_lun", 00:05:31.019 "iscsi_get_stats", 00:05:31.019 "iscsi_get_connections", 00:05:31.019 "iscsi_portal_group_set_auth", 00:05:31.019 "iscsi_start_portal_group", 00:05:31.019 "iscsi_delete_portal_group", 00:05:31.019 "iscsi_create_portal_group", 00:05:31.019 "iscsi_get_portal_groups", 00:05:31.019 "iscsi_delete_target_node", 00:05:31.019 "iscsi_target_node_remove_pg_ig_maps", 00:05:31.019 "iscsi_target_node_add_pg_ig_maps", 00:05:31.019 "iscsi_create_target_node", 00:05:31.019 "iscsi_get_target_nodes", 00:05:31.019 "iscsi_delete_initiator_group", 00:05:31.019 "iscsi_initiator_group_remove_initiators", 00:05:31.019 "iscsi_initiator_group_add_initiators", 00:05:31.019 "iscsi_create_initiator_group", 00:05:31.019 "iscsi_get_initiator_groups", 00:05:31.019 "nvmf_set_crdt", 00:05:31.019 "nvmf_set_config", 00:05:31.019 "nvmf_set_max_subsystems", 00:05:31.019 "nvmf_subsystem_get_listeners", 00:05:31.019 "nvmf_subsystem_get_qpairs", 00:05:31.019 "nvmf_subsystem_get_controllers", 00:05:31.019 "nvmf_get_stats", 00:05:31.019 "nvmf_get_transports", 00:05:31.019 "nvmf_create_transport", 00:05:31.019 "nvmf_get_targets", 00:05:31.019 "nvmf_delete_target", 00:05:31.019 "nvmf_create_target", 00:05:31.019 "nvmf_subsystem_allow_any_host", 00:05:31.019 "nvmf_subsystem_remove_host", 00:05:31.019 "nvmf_subsystem_add_host", 00:05:31.019 "nvmf_ns_remove_host", 00:05:31.019 "nvmf_ns_add_host", 00:05:31.019 "nvmf_subsystem_remove_ns", 00:05:31.019 "nvmf_subsystem_add_ns", 00:05:31.019 "nvmf_subsystem_listener_set_ana_state", 00:05:31.019 "nvmf_discovery_get_referrals", 00:05:31.019 "nvmf_discovery_remove_referral", 00:05:31.019 "nvmf_discovery_add_referral", 00:05:31.019 "nvmf_subsystem_remove_listener", 
00:05:31.019 "nvmf_subsystem_add_listener", 00:05:31.019 "nvmf_delete_subsystem", 00:05:31.019 "nvmf_create_subsystem", 00:05:31.019 "nvmf_get_subsystems", 00:05:31.019 "env_dpdk_get_mem_stats", 00:05:31.019 "nbd_get_disks", 00:05:31.019 "nbd_stop_disk", 00:05:31.019 "nbd_start_disk", 00:05:31.019 "ublk_recover_disk", 00:05:31.019 "ublk_get_disks", 00:05:31.019 "ublk_stop_disk", 00:05:31.019 "ublk_start_disk", 00:05:31.019 "ublk_destroy_target", 00:05:31.019 "ublk_create_target", 00:05:31.019 "virtio_blk_create_transport", 00:05:31.019 "virtio_blk_get_transports", 00:05:31.019 "vhost_controller_set_coalescing", 00:05:31.019 "vhost_get_controllers", 00:05:31.019 "vhost_delete_controller", 00:05:31.019 "vhost_create_blk_controller", 00:05:31.019 "vhost_scsi_controller_remove_target", 00:05:31.019 "vhost_scsi_controller_add_target", 00:05:31.019 "vhost_start_scsi_controller", 00:05:31.019 "vhost_create_scsi_controller", 00:05:31.019 "thread_set_cpumask", 00:05:31.019 "framework_get_scheduler", 00:05:31.019 "framework_set_scheduler", 00:05:31.019 "framework_get_reactors", 00:05:31.019 "thread_get_io_channels", 00:05:31.019 "thread_get_pollers", 00:05:31.019 "thread_get_stats", 00:05:31.019 "framework_monitor_context_switch", 00:05:31.019 "spdk_kill_instance", 00:05:31.019 "log_enable_timestamps", 00:05:31.019 "log_get_flags", 00:05:31.019 "log_clear_flag", 00:05:31.019 "log_set_flag", 00:05:31.019 "log_get_level", 00:05:31.019 "log_set_level", 00:05:31.019 "log_get_print_level", 00:05:31.019 "log_set_print_level", 00:05:31.019 "framework_enable_cpumask_locks", 00:05:31.019 "framework_disable_cpumask_locks", 00:05:31.019 "framework_wait_init", 00:05:31.019 "framework_start_init", 00:05:31.019 "scsi_get_devices", 00:05:31.019 "bdev_get_histogram", 00:05:31.019 "bdev_enable_histogram", 00:05:31.019 "bdev_set_qos_limit", 00:05:31.019 "bdev_set_qd_sampling_period", 00:05:31.019 "bdev_get_bdevs", 00:05:31.019 "bdev_reset_iostat", 00:05:31.019 "bdev_get_iostat", 00:05:31.019 "bdev_examine", 00:05:31.019 "bdev_wait_for_examine", 00:05:31.019 "bdev_set_options", 00:05:31.019 "notify_get_notifications", 00:05:31.019 "notify_get_types", 00:05:31.019 "accel_get_stats", 00:05:31.019 "accel_set_options", 00:05:31.019 "accel_set_driver", 00:05:31.019 "accel_crypto_key_destroy", 00:05:31.019 "accel_crypto_keys_get", 00:05:31.019 "accel_crypto_key_create", 00:05:31.019 "accel_assign_opc", 00:05:31.019 "accel_get_module_info", 00:05:31.019 "accel_get_opc_assignments", 00:05:31.019 "vmd_rescan", 00:05:31.019 "vmd_remove_device", 00:05:31.019 "vmd_enable", 00:05:31.019 "sock_get_default_impl", 00:05:31.019 "sock_set_default_impl", 00:05:31.019 "sock_impl_set_options", 00:05:31.019 "sock_impl_get_options", 00:05:31.019 "iobuf_get_stats", 00:05:31.019 "iobuf_set_options", 00:05:31.019 "keyring_get_keys", 00:05:31.019 "framework_get_pci_devices", 00:05:31.019 "framework_get_config", 00:05:31.019 "framework_get_subsystems", 00:05:31.019 "vfu_tgt_set_base_path", 00:05:31.019 "trace_get_info", 00:05:31.019 "trace_get_tpoint_group_mask", 00:05:31.019 "trace_disable_tpoint_group", 00:05:31.019 "trace_enable_tpoint_group", 00:05:31.019 "trace_clear_tpoint_mask", 00:05:31.019 "trace_set_tpoint_mask", 00:05:31.019 "spdk_get_version", 00:05:31.019 "rpc_get_methods" 00:05:31.019 ] 00:05:31.019 14:46:16 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:31.019 14:46:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:31.019 14:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.019 14:46:16 -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:31.019 14:46:16 -- spdkcli/tcp.sh@38 -- # killprocess 3652384 00:05:31.019 14:46:16 -- common/autotest_common.sh@936 -- # '[' -z 3652384 ']' 00:05:31.019 14:46:16 -- common/autotest_common.sh@940 -- # kill -0 3652384 00:05:31.019 14:46:16 -- common/autotest_common.sh@941 -- # uname 00:05:31.019 14:46:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.019 14:46:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3652384 00:05:31.019 14:46:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.019 14:46:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.019 14:46:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3652384' 00:05:31.019 killing process with pid 3652384 00:05:31.019 14:46:16 -- common/autotest_common.sh@955 -- # kill 3652384 00:05:31.019 14:46:16 -- common/autotest_common.sh@960 -- # wait 3652384 00:05:31.585 00:05:31.585 real 0m1.209s 00:05:31.585 user 0m2.141s 00:05:31.585 sys 0m0.437s 00:05:31.585 14:46:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.585 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:31.585 ************************************ 00:05:31.585 END TEST spdkcli_tcp 00:05:31.585 ************************************ 00:05:31.585 14:46:17 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.585 14:46:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.585 14:46:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.585 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:31.585 ************************************ 00:05:31.585 START TEST dpdk_mem_utility 00:05:31.585 ************************************ 00:05:31.585 14:46:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.585 * Looking for test storage... 00:05:31.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:31.585 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.585 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3652597 00:05:31.585 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.585 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3652597 00:05:31.585 14:46:17 -- common/autotest_common.sh@817 -- # '[' -z 3652597 ']' 00:05:31.585 14:46:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.585 14:46:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.585 14:46:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.585 14:46:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.585 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:31.843 [2024-04-26 14:46:17.356368] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
00:05:31.843 [2024-04-26 14:46:17.356464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652597 ]
00:05:31.843 EAL: No free 2048 kB hugepages reported on node 1
00:05:31.843 [2024-04-26 14:46:17.388422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:31.843 [2024-04-26 14:46:17.414538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.843 [2024-04-26 14:46:17.497298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.101 14:46:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:32.101 14:46:17 -- common/autotest_common.sh@850 -- # return 0
00:05:32.101 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:32.101 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:32.101 14:46:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:32.101 14:46:17 -- common/autotest_common.sh@10 -- # set +x
00:05:32.102 {
00:05:32.102 "filename": "/tmp/spdk_mem_dump.txt"
00:05:32.102 }
00:05:32.102 14:46:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:32.102 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:32.102 DPDK memory size 814.000000 MiB in 1 heap(s)
00:05:32.102 1 heaps totaling size 814.000000 MiB
00:05:32.102 size: 814.000000 MiB heap id: 0
00:05:32.102 end heaps----------
00:05:32.102 8 mempools totaling size 598.116089 MiB
00:05:32.102 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:32.102 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:32.102 size: 84.521057 MiB name: bdev_io_3652597
00:05:32.102 size: 51.011292 MiB name: evtpool_3652597
00:05:32.102 size: 50.003479 MiB name: msgpool_3652597
00:05:32.102 size: 21.763794 MiB name: PDU_Pool
00:05:32.102 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:32.102 size: 0.026123 MiB name: Session_Pool
00:05:32.102 end mempools-------
00:05:32.102 6 memzones totaling size 4.142822 MiB
00:05:32.102 size: 1.000366 MiB name: RG_ring_0_3652597
00:05:32.102 size: 1.000366 MiB name: RG_ring_1_3652597
00:05:32.102 size: 1.000366 MiB name: RG_ring_4_3652597
00:05:32.102 size: 1.000366 MiB name: RG_ring_5_3652597
00:05:32.102 size: 0.125366 MiB name: RG_ring_2_3652597
00:05:32.102 size: 0.015991 MiB name: RG_ring_3_3652597
00:05:32.102 end memzones-------
00:05:32.102 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:32.360 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:05:32.360 list of free elements. size: 12.519348 MiB
00:05:32.360 element at address: 0x200000400000 with size: 1.999512 MiB
00:05:32.360 element at address: 0x200018e00000 with size: 0.999878 MiB
00:05:32.360 element at address: 0x200019000000 with size: 0.999878 MiB
00:05:32.360 element at address: 0x200003e00000 with size: 0.996277 MiB
00:05:32.360 element at address: 0x200031c00000 with size: 0.994446 MiB
00:05:32.360 element at address: 0x200013800000 with size: 0.978699 MiB
00:05:32.360 element at address: 0x200007000000 with size: 0.959839 MiB
00:05:32.361 element at address: 0x200019200000 with size: 0.936584 MiB
00:05:32.361 element at address: 0x200000200000 with size: 0.841614 MiB
00:05:32.361 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:05:32.361 element at address: 0x20000b200000 with size: 0.490723 MiB
00:05:32.361 element at address: 0x200000800000 with size: 0.487793 MiB
00:05:32.361 element at address: 0x200019400000 with size: 0.485657 MiB
00:05:32.361 element at address: 0x200027e00000 with size: 0.410034 MiB
00:05:32.361 element at address: 0x200003a00000 with size: 0.355530 MiB
00:05:32.361 list of standard malloc elements. size: 199.218079 MiB
00:05:32.361 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:05:32.361 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:05:32.361 element at address: 0x200018efff80 with size: 1.000122 MiB
00:05:32.361 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:05:32.361 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:32.361 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:32.361 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:05:32.361 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:32.361 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:05:32.361 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:32.361 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:05:32.361 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200003adb300 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200003adb500 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200003affa80 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200003affb40 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:05:32.361 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:05:32.361 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:05:32.361 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:05:32.361 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:05:32.361 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:05:32.361 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200027e69040 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:05:32.361 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:05:32.361 list of memzone associated elements. size: 602.262573 MiB
00:05:32.361 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:05:32.361 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:32.361 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:05:32.361 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:32.361 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:05:32.361 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3652597_0
00:05:32.361 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:05:32.361 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3652597_0
00:05:32.361 element at address: 0x200003fff380 with size: 48.003052 MiB
00:05:32.361 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3652597_0
00:05:32.361 element at address: 0x2000195be940 with size: 20.255554 MiB
00:05:32.361 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:32.361 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:05:32.361 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:32.361 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:05:32.361 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3652597
00:05:32.361 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:05:32.361 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3652597
00:05:32.361 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:32.361 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3652597
00:05:32.361 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:05:32.361 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:32.361 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:05:32.361 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:32.361 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:05:32.361 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:32.361 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:05:32.361 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:32.361 element at address: 0x200003eff180 with size: 1.000488 MiB
00:05:32.361 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3652597
00:05:32.361 element at address: 0x200003affc00 with size: 1.000488 MiB
00:05:32.361 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3652597
00:05:32.361 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:05:32.361 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3652597
00:05:32.361 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:05:32.361 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3652597
00:05:32.361 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:05:32.361 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3652597
00:05:32.361 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:05:32.361 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:32.361 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:05:32.361 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:32.361 element at address: 0x20001947c540 with size: 0.250488 MiB
00:05:32.361 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:32.361 element at address: 0x200003adf880 with size: 0.125488 MiB
00:05:32.361 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3652597
00:05:32.361 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:05:32.361 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:32.361 element at address: 0x200027e69100 with size: 0.023743 MiB
00:05:32.361 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:32.361 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:05:32.361 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3652597
00:05:32.361 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:05:32.361 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:32.361 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:05:32.361 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3652597
00:05:32.361 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:05:32.361 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3652597
00:05:32.361 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:05:32.361 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:32.361 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:32.361 14:46:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3652597
00:05:32.361 14:46:17 -- common/autotest_common.sh@936 -- # '[' -z 3652597 ']'
00:05:32.361 14:46:17 -- common/autotest_common.sh@940 -- # kill -0 3652597
00:05:32.361 14:46:17 -- common/autotest_common.sh@941 -- # uname
00:05:32.361 14:46:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:32.361 14:46:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3652597
00:05:32.361 14:46:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:32.361 14:46:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:32.361 14:46:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3652597'
killing process with pid 3652597
00:05:32.361 14:46:17 -- common/autotest_common.sh@955 -- # kill 3652597
00:05:32.361 14:46:17 -- common/autotest_common.sh@960 -- # wait 3652597
00:05:32.619
00:05:32.619 real 0m1.039s
00:05:32.619 user 0m1.005s
00:05:32.619 sys 0m0.396s
00:05:32.619 14:46:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:32.619 14:46:18 -- common/autotest_common.sh@10 -- # set +x
00:05:32.619 ************************************
00:05:32.619 END TEST dpdk_mem_utility
00:05:32.619 ************************************
00:05:32.619 14:46:18 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:32.619 14:46:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:32.619 14:46:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:32.619 14:46:18 -- common/autotest_common.sh@10 -- # set +x
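A note on the dpdk_mem_utility pass that just finished: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK heap statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump, as the summary by default and element by element for a single heap with -m. A minimal sketch of the same flow, assuming a target is already listening on the default /var/tmp/spdk.sock and paths are relative to the spdk checkout:

    # Trigger the dump; the RPC replies with the output filename
    scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py
    # Expand heap 0 into its busy/free element list, as seen above
    scripts/dpdk_mem_info.py -m 0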
00:05:32.878 ************************************
00:05:32.878 START TEST event
00:05:32.878 ************************************
00:05:32.878 14:46:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:32.878 * Looking for test storage...
00:05:32.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:32.878 14:46:18 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:32.878 14:46:18 -- bdev/nbd_common.sh@6 -- # set -e
00:05:32.878 14:46:18 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:32.878 14:46:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:05:32.878 14:46:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:32.878 14:46:18 -- common/autotest_common.sh@10 -- # set +x
00:05:32.878 ************************************
00:05:32.878 START TEST event_perf
00:05:32.878 ************************************
00:05:32.878 14:46:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:32.878 Running I/O for 1 seconds...[2024-04-26 14:46:18.587841] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:05:32.878 [2024-04-26 14:46:18.587909] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652801 ]
00:05:33.136 EAL: No free 2048 kB hugepages reported on node 1
00:05:33.136 [2024-04-26 14:46:18.621742] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:33.136 [2024-04-26 14:46:18.665228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:33.136 [2024-04-26 14:46:18.756475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:33.136 [2024-04-26 14:46:18.756530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:33.136 [2024-04-26 14:46:18.756594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:33.136 [2024-04-26 14:46:18.756596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.509 Running I/O for 1 seconds...
00:05:34.509 lcore 0: 238858
00:05:34.509 lcore 1: 238856
00:05:34.509 lcore 2: 238856
00:05:34.509 lcore 3: 238856
00:05:34.509 done.
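For reading the numbers above: event_perf starts one reactor per bit in the core mask and each reactor counts the events it processes until the requested duration expires, so every "lcore N" line is a per-core events-per-second figure (roughly 239k per core in this run). In sketch form, with the same flags as traced:

    # -m 0xF: run reactors on cores 0-3; -t 1: measure for one second
    test/event/event_perf/event_perf -m 0xF -t 1
    # expect one "lcore N: <count>" line per core, as in the output above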
00:05:34.510
00:05:34.510 real 0m1.265s
00:05:34.510 user 0m4.161s
00:05:34.510 sys 0m0.099s
00:05:34.510 14:46:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:34.510 14:46:19 -- common/autotest_common.sh@10 -- # set +x
00:05:34.510 ************************************
00:05:34.510 END TEST event_perf
00:05:34.510 ************************************
00:05:34.510 14:46:19 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:34.510 14:46:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:05:34.510 14:46:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:34.510 14:46:19 -- common/autotest_common.sh@10 -- # set +x
00:05:34.510 ************************************
00:05:34.510 START TEST event_reactor
00:05:34.510 ************************************
00:05:34.510 14:46:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:34.510 [2024-04-26 14:46:19.968948] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:05:34.510 [2024-04-26 14:46:19.969012] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652969 ]
00:05:34.510 EAL: No free 2048 kB hugepages reported on node 1
00:05:34.510 [2024-04-26 14:46:20.003015] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:34.510 [2024-04-26 14:46:20.035838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.510 [2024-04-26 14:46:20.131551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.882 test_start
00:05:35.882 oneshot
00:05:35.882 tick 100
00:05:35.882 tick 100
00:05:35.882 tick 250
00:05:35.882 tick 100
00:05:35.882 tick 100
00:05:35.882 tick 100
00:05:35.882 tick 250
00:05:35.882 tick 500
00:05:35.882 tick 100
00:05:35.882 tick 100
00:05:35.882 tick 250
00:05:35.882 tick 100
00:05:35.882 tick 100
00:05:35.882 test_end
00:05:35.882
00:05:35.882 real 0m1.255s
00:05:35.882 user 0m1.167s
00:05:35.882 sys 0m0.083s
00:05:35.882 14:46:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:35.882 14:46:21 -- common/autotest_common.sh@10 -- # set +x
00:05:35.882 ************************************
00:05:35.882 END TEST event_reactor
00:05:35.882 ************************************
00:05:35.882 14:46:21 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:35.882 14:46:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:05:35.882 14:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:35.882 14:46:21 -- common/autotest_common.sh@10 -- # set +x
00:05:35.882 ************************************
00:05:35.882 START TEST event_reactor_perf
00:05:35.882 ************************************
00:05:35.882 14:46:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:35.882 [2024-04-26 14:46:21.338154] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:05:35.882 [2024-04-26 14:46:21.338218] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653254 ]
00:05:35.882 EAL: No free 2048 kB hugepages reported on node 1
00:05:35.882 [2024-04-26 14:46:21.371108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:35.882 [2024-04-26 14:46:21.403046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.882 [2024-04-26 14:46:21.494905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.252 test_start
00:05:37.252 test_end
00:05:37.252 Performance: 351495 events per second
00:05:37.252
00:05:37.252 real 0m1.254s
00:05:37.252 user 0m1.165s
00:05:37.252 sys 0m0.084s
00:05:37.252 14:46:22 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:37.252 14:46:22 -- common/autotest_common.sh@10 -- # set +x
00:05:37.252 ************************************
00:05:37.252 END TEST event_reactor_perf
00:05:37.252 ************************************
00:05:37.252 14:46:22 -- event/event.sh@49 -- # uname -s
00:05:37.252 14:46:22 -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:37.252 14:46:22 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:37.252 14:46:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:37.252 14:46:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:37.252 14:46:22 -- common/autotest_common.sh@10 -- # set +x
00:05:37.252 ************************************
00:05:37.252 START TEST event_scheduler
00:05:37.252 ************************************
00:05:37.252 14:46:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:37.252 * Looking for test storage...
00:05:37.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:37.252 14:46:22 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:37.252 14:46:22 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3653440
00:05:37.252 14:46:22 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:37.252 14:46:22 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:37.252 14:46:22 -- scheduler/scheduler.sh@37 -- # waitforlisten 3653440
00:05:37.252 14:46:22 -- common/autotest_common.sh@817 -- # '[' -z 3653440 ']'
00:05:37.252 14:46:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:37.252 14:46:22 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:37.252 14:46:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:37.252 14:46:22 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:37.252 14:46:22 -- common/autotest_common.sh@10 -- # set +x
00:05:37.252 [2024-04-26 14:46:22.809601] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:05:37.252 [2024-04-26 14:46:22.809673] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653440 ]
00:05:37.252 EAL: No free 2048 kB hugepages reported on node 1
00:05:37.252 [2024-04-26 14:46:22.841722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:37.252 [2024-04-26 14:46:22.867980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:37.253 [2024-04-26 14:46:22.953450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.253 [2024-04-26 14:46:22.953508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.253 [2024-04-26 14:46:22.953575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:37.253 [2024-04-26 14:46:22.953578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:37.510 14:46:23 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:37.510 14:46:23 -- common/autotest_common.sh@850 -- # return 0
00:05:37.510 14:46:23 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:37.510 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.510 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.510 POWER: Env isn't set yet!
00:05:37.510 POWER: Attempting to initialise ACPI cpufreq power management...
00:05:37.510 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies
00:05:37.510 POWER: Cannot get available frequencies of lcore 0
00:05:37.510 POWER: Attempting to initialise PSTAT power management...
00:05:37.510 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:05:37.510 POWER: Initialized successfully for lcore 0 power management
00:05:37.510 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:05:37.510 POWER: Initialized successfully for lcore 1 power management
00:05:37.510 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:05:37.510 POWER: Initialized successfully for lcore 2 power management
00:05:37.510 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:05:37.510 POWER: Initialized successfully for lcore 3 power management
00:05:37.510 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.510 14:46:23 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:37.510 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.510 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.511 [2024-04-26 14:46:23.146071] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
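The sequence just traced is the usual --wait-for-rpc bring-up: scheduler.sh starts the app with initialization parked at the RPC stage, switches the framework to the dynamic scheduler, then finishes init, which is when the POWER governor lines appear. Reduced to its RPC calls, a sketch (rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py):

    # start parked at the RPC stage (-p 0x2 makes core 2 the main lcore)
    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # select the dynamic scheduler while subsystem init is still pending
    scripts/rpc.py framework_set_scheduler dynamic
    # complete initialization; per-lcore power management is configured here
    scripts/rpc.py framework_start_init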
00:05:37.511 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.511 14:46:23 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:37.511 14:46:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:37.511 14:46:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:37.511 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.511 ************************************
00:05:37.511 START TEST scheduler_create_thread
00:05:37.511 ************************************
00:05:37.511 14:46:23 -- common/autotest_common.sh@1111 -- # scheduler_create_thread
00:05:37.511 14:46:23 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:37.511 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.511 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 2
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 3
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 4
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 5
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 6
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 7
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 8
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 9
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 10
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:37.801 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:37.801 14:46:23 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:37.801 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:37.801 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:38.369 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:38.369 14:46:23 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:38.369 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:38.369 14:46:23 -- common/autotest_common.sh@10 -- # set +x
00:05:39.739 14:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:39.739 14:46:25 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:39.739 14:46:25 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:39.739 14:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:39.739 14:46:25 -- common/autotest_common.sh@10 -- # set +x
00:05:40.671 14:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:40.671
00:05:40.671 real 0m3.097s
00:05:40.671 user 0m0.009s
00:05:40.671 sys 0m0.004s
00:05:40.671 14:46:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:40.671 14:46:26 -- common/autotest_common.sh@10 -- # set +x
00:05:40.671 ************************************
00:05:40.671 END TEST scheduler_create_thread
00:05:40.671 ************************************
00:05:40.671 14:46:26 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:40.671 14:46:26 -- scheduler/scheduler.sh@46 -- # killprocess 3653440
00:05:40.671 14:46:26 -- common/autotest_common.sh@936 -- # '[' -z 3653440 ']'
00:05:40.671 14:46:26 -- common/autotest_common.sh@940 -- # kill -0 3653440
00:05:40.671 14:46:26 -- common/autotest_common.sh@941 -- # uname
00:05:40.671 14:46:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:40.671 14:46:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3653440
00:05:40.671 14:46:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:05:40.671 14:46:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:05:40.671 14:46:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3653440'
killing process with pid 3653440
00:05:40.671 14:46:26 -- common/autotest_common.sh@955 -- # kill 3653440
00:05:40.671 14:46:26 -- common/autotest_common.sh@960 -- # wait 3653440
00:05:41.238 [2024-04-26 14:46:26.726367] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
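Everything in the scheduler_create_thread subtest above goes through the test's RPC plugin: scheduler_thread_create takes a name, an optional -m cpumask and an -a busy percentage and returns a thread id, which the later set-active and delete steps consume (ids 11 and 12 in this run). A condensed sketch of the calls as traced, using the plugin the same way scheduler.sh does:

    # pinned thread on core 0 that is 100% busy
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # unpinned thread that is busy 30% of the time
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    # retune thread 11 to 50% activity, then drop thread 12 entirely
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12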
00:05:41.238 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully
00:05:41.238 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:05:41.238 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully
00:05:41.238 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:05:41.238 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully
00:05:41.238 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:05:41.238 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully
00:05:41.238 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:05:41.238
00:05:41.238 real 0m4.252s
00:05:41.238 user 0m6.981s
00:05:41.238 sys 0m0.369s
00:05:41.238 14:46:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:41.238 14:46:26 -- common/autotest_common.sh@10 -- # set +x
00:05:41.238 ************************************
00:05:41.238 END TEST event_scheduler
00:05:41.238 ************************************
00:05:41.497 14:46:26 -- event/event.sh@51 -- # modprobe -n nbd
00:05:41.497 14:46:26 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:41.497 14:46:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:41.497 14:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:41.497 14:46:26 -- common/autotest_common.sh@10 -- # set +x
00:05:41.497 ************************************
00:05:41.497 START TEST app_repeat
00:05:41.497 ************************************
00:05:41.497 14:46:27 -- common/autotest_common.sh@1111 -- # app_repeat_test
00:05:41.497 14:46:27 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.497 14:46:27 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.497 14:46:27 -- event/event.sh@13 -- # local nbd_list
00:05:41.497 14:46:27 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:41.497 14:46:27 -- event/event.sh@14 -- # local bdev_list
00:05:41.497 14:46:27 -- event/event.sh@15 -- # local repeat_times=4
00:05:41.497 14:46:27 -- event/event.sh@17 -- # modprobe nbd
00:05:41.497 14:46:27 -- event/event.sh@19 -- # repeat_pid=3654034
00:05:41.497 14:46:27 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:41.497 14:46:27 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:41.497 14:46:27 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3654034'
Process app_repeat pid: 3654034
00:05:41.497 14:46:27 -- event/event.sh@23 -- # for i in {0..2}
00:05:41.497 14:46:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:05:41.497 14:46:27 -- event/event.sh@25 -- # waitforlisten 3654034 /var/tmp/spdk-nbd.sock
00:05:41.497 14:46:27 -- common/autotest_common.sh@817 -- # '[' -z 3654034 ']'
00:05:41.497 14:46:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:41.497 14:46:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:41.497 14:46:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:41.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
14:46:27 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:41.497 14:46:27 -- common/autotest_common.sh@10 -- # set +x
00:05:41.497 [2024-04-26 14:46:27.121771] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:05:41.497 [2024-04-26 14:46:27.121837] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654034 ]
00:05:41.497 EAL: No free 2048 kB hugepages reported on node 1
00:05:41.497 [2024-04-26 14:46:27.154396] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:41.497 [2024-04-26 14:46:27.185889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:41.755 [2024-04-26 14:46:27.274594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:41.755 [2024-04-26 14:46:27.274598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.755 14:46:27 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:41.755 14:46:27 -- common/autotest_common.sh@850 -- # return 0
00:05:41.755 14:46:27 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:42.012 Malloc0
00:05:42.012 14:46:27 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:42.270 Malloc1
00:05:42.270 14:46:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@12 -- # local i
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:42.270 14:46:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:42.528 /dev/nbd0
00:05:42.528 14:46:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:42.528 14:46:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:42.528 14:46:28 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:05:42.528 14:46:28 -- common/autotest_common.sh@855 -- # local i
00:05:42.528 14:46:28 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:05:42.528 14:46:28 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:05:42.528 14:46:28 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:05:42.528 14:46:28 -- common/autotest_common.sh@859 -- # break
00:05:42.528 14:46:28 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:05:42.528 14:46:28 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:05:42.528 14:46:28 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:42.528 1+0 records in
00:05:42.528 1+0 records out
00:05:42.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173873 s, 23.6 MB/s
00:05:42.528 14:46:28 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:42.528 14:46:28 -- common/autotest_common.sh@872 -- # size=4096
00:05:42.528 14:46:28 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:42.528 14:46:28 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:05:42.528 14:46:28 -- common/autotest_common.sh@875 -- # return 0
00:05:42.528 14:46:28 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:42.528 14:46:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:42.528 14:46:28 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:42.786 /dev/nbd1
00:05:42.786 14:46:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:42.786 14:46:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:42.786 14:46:28 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1
00:05:42.786 14:46:28 -- common/autotest_common.sh@855 -- # local i
00:05:42.786 14:46:28 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:05:42.786 14:46:28 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:05:42.786 14:46:28 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions
00:05:42.786 14:46:28 -- common/autotest_common.sh@859 -- # break
00:05:42.786 14:46:28 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:05:42.786 14:46:28 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:05:42.786 14:46:28 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:42.786 1+0 records in
00:05:42.786 1+0 records out
00:05:42.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194866 s, 21.0 MB/s
00:05:42.786 14:46:28 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:42.786 14:46:28 -- common/autotest_common.sh@872 -- # size=4096
00:05:42.786 14:46:28 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:42.786 14:46:28 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:05:42.786 14:46:28 -- common/autotest_common.sh@875 -- # return 0
00:05:42.786 14:46:28 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:42.786 14:46:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:42.786 14:46:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:42.786 14:46:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:42.786 14:46:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:43.043 {
00:05:43.043 "nbd_device": "/dev/nbd0",
00:05:43.043 "bdev_name": "Malloc0"
00:05:43.043 },
00:05:43.043 {
00:05:43.043 "nbd_device": "/dev/nbd1",
00:05:43.043 "bdev_name": "Malloc1"
00:05:43.043 }
00:05:43.043 ]'
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@64 -- # echo '[
00:05:43.043 {
00:05:43.043 "nbd_device": "/dev/nbd0",
00:05:43.043 "bdev_name": "Malloc0"
00:05:43.043 },
00:05:43.043 {
00:05:43.043 "nbd_device": "/dev/nbd1",
00:05:43.043 "bdev_name": "Malloc1"
00:05:43.043 }
00:05:43.043 ]'
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:43.043 /dev/nbd1'
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:43.043 /dev/nbd1'
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@65 -- # count=2
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@66 -- # echo 2
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@95 -- # count=2
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:43.043 14:46:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:43.044 256+0 records in
00:05:43.044 256+0 records out
00:05:43.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418752 s, 250 MB/s
00:05:43.044 14:46:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:43.044 14:46:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:43.044 256+0 records in
00:05:43.044 256+0 records out
00:05:43.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239213 s, 43.8 MB/s
00:05:43.044 14:46:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:43.044 14:46:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:43.301 256+0 records in
00:05:43.301 256+0 records out
00:05:43.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288021 s, 36.4 MB/s
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@51 -- # local i
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:43.301 14:46:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@41 -- # break
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@45 -- # return 0
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:43.301 14:46:29 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@41 -- # break
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@45 -- # return 0
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.558 14:46:29 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:43.815 14:46:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:43.815 14:46:29 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:43.815 14:46:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:44.092 14:46:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:44.092 14:46:29 -- bdev/nbd_common.sh@65 -- # echo ''
00:05:44.092 14:46:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:44.092 14:46:29 -- bdev/nbd_common.sh@65 -- # true
00:05:44.092 14:46:29 -- bdev/nbd_common.sh@65 -- # count=0
00:05:44.092 14:46:29 -- bdev/nbd_common.sh@66 -- # echo 0
00:05:44.092 14:46:29 -- bdev/nbd_common.sh@104 -- # count=0
00:05:44.092 14:46:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:44.092 14:46:29 -- bdev/nbd_common.sh@109 -- # return 0
00:05:44.092 14:46:29 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:44.350 14:46:29 -- event/event.sh@35 -- # sleep 3
00:05:44.350 [2024-04-26 14:46:30.075834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:44.609 [2024-04-26 14:46:30.169370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.609 [2024-04-26 14:46:30.169373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:44.609 [2024-04-26 14:46:30.228920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:44.609 [2024-04-26 14:46:30.228979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:47.134 14:46:32 -- event/event.sh@23 -- # for i in {0..2}
00:05:47.134 14:46:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
00:05:47.134 14:46:32 -- event/event.sh@25 -- # waitforlisten 3654034 /var/tmp/spdk-nbd.sock
00:05:47.134 14:46:32 -- common/autotest_common.sh@817 -- # '[' -z 3654034 ']'
00:05:47.134 14:46:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:47.134 14:46:32 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:47.134 14:46:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:47.134 14:46:32 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:47.134 14:46:32 -- common/autotest_common.sh@10 -- # set +x
00:05:47.392 14:46:33 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:47.392 14:46:33 -- common/autotest_common.sh@850 -- # return 0
00:05:47.392 14:46:33 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:47.670 Malloc0
00:05:47.670 14:46:33 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:47.932 Malloc1
00:05:47.932 14:46:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@12 -- # local i
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:47.932 14:46:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
14:46:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.190 /dev/nbd0 00:05:48.190 14:46:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.190 14:46:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.190 14:46:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:48.190 14:46:33 -- common/autotest_common.sh@855 -- # local i 00:05:48.190 14:46:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:48.190 14:46:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:48.190 14:46:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:48.190 14:46:33 -- common/autotest_common.sh@859 -- # break 00:05:48.190 14:46:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:48.190 14:46:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:48.190 14:46:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.190 1+0 records in 00:05:48.190 1+0 records out 00:05:48.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210046 s, 19.5 MB/s 00:05:48.190 14:46:33 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.190 14:46:33 -- common/autotest_common.sh@872 -- # size=4096 00:05:48.190 14:46:33 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.190 14:46:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:48.190 14:46:33 -- common/autotest_common.sh@875 -- # return 0 00:05:48.190 14:46:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.190 14:46:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.190 14:46:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.448 /dev/nbd1 00:05:48.448 14:46:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.448 14:46:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.448 14:46:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:48.448 14:46:34 -- common/autotest_common.sh@855 -- # local i 00:05:48.448 14:46:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:48.448 14:46:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:48.448 14:46:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:48.448 14:46:34 -- common/autotest_common.sh@859 -- # break 00:05:48.448 14:46:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:48.448 14:46:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:48.448 14:46:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.448 1+0 records in 00:05:48.448 1+0 records out 00:05:48.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183055 s, 22.4 MB/s 00:05:48.448 14:46:34 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.448 14:46:34 -- common/autotest_common.sh@872 -- # size=4096 00:05:48.448 14:46:34 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.448 14:46:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:48.448 14:46:34 -- 
common/autotest_common.sh@875 -- # return 0 00:05:48.448 14:46:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.448 14:46:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.448 14:46:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.448 14:46:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.448 14:46:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.705 { 00:05:48.705 "nbd_device": "/dev/nbd0", 00:05:48.705 "bdev_name": "Malloc0" 00:05:48.705 }, 00:05:48.705 { 00:05:48.705 "nbd_device": "/dev/nbd1", 00:05:48.705 "bdev_name": "Malloc1" 00:05:48.705 } 00:05:48.705 ]' 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.705 { 00:05:48.705 "nbd_device": "/dev/nbd0", 00:05:48.705 "bdev_name": "Malloc0" 00:05:48.705 }, 00:05:48.705 { 00:05:48.705 "nbd_device": "/dev/nbd1", 00:05:48.705 "bdev_name": "Malloc1" 00:05:48.705 } 00:05:48.705 ]' 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.705 /dev/nbd1' 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.705 /dev/nbd1' 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.705 256+0 records in 00:05:48.705 256+0 records out 00:05:48.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503796 s, 208 MB/s 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.705 14:46:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.962 256+0 records in 00:05:48.962 256+0 records out 00:05:48.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271685 s, 38.6 MB/s 00:05:48.962 14:46:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.962 14:46:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.962 256+0 records in 00:05:48.962 256+0 records out 00:05:48.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260207 s, 40.3 MB/s 00:05:48.962 14:46:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.962 14:46:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.962 14:46:34 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:05:48.962 14:46:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.962 14:46:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@51 -- # local i 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.963 14:46:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@41 -- # break 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.220 14:46:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@41 -- # break 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.478 14:46:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.735 14:46:35 -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@65 -- # true 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.735 14:46:35 -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.735 14:46:35 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.992 14:46:35 -- event/event.sh@35 -- # sleep 3 00:05:50.250 [2024-04-26 14:46:35.774160] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.250 [2024-04-26 14:46:35.861589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.250 [2024-04-26 14:46:35.861591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.250 [2024-04-26 14:46:35.920216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.250 [2024-04-26 14:46:35.920280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.529 14:46:38 -- event/event.sh@23 -- # for i in {0..2} 00:05:53.529 14:46:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:53.529 spdk_app_start Round 2 00:05:53.529 14:46:38 -- event/event.sh@25 -- # waitforlisten 3654034 /var/tmp/spdk-nbd.sock 00:05:53.529 14:46:38 -- common/autotest_common.sh@817 -- # '[' -z 3654034 ']' 00:05:53.529 14:46:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.529 14:46:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.529 14:46:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
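The nbd_dd_data_verify calls traced above reduce to a simple write-then-verify cycle: fill a scratch file with random data, copy it onto every exported NBD device with O_DIRECT, then byte-compare each device against the file. A minimal sketch of that cycle, using only the commands and flags visible in the trace (the device list and scratch path are illustrative, not the helper's actual defaults):

# write/verify pattern from bdev/nbd_common.sh, reduced to its core
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest          # illustrative scratch path

# write phase: 256 x 4 KiB of random data, pushed to each device with O_DIRECT
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: cmp exits non-zero on the first differing byte
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"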
00:05:53.529 14:46:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.529 14:46:38 -- common/autotest_common.sh@10 -- # set +x 00:05:53.529 14:46:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.529 14:46:38 -- common/autotest_common.sh@850 -- # return 0 00:05:53.529 14:46:38 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.529 Malloc0 00:05:53.529 14:46:39 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.787 Malloc1 00:05:53.787 14:46:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@12 -- # local i 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.787 14:46:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.044 /dev/nbd0 00:05:54.044 14:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.044 14:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.044 14:46:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:54.044 14:46:39 -- common/autotest_common.sh@855 -- # local i 00:05:54.044 14:46:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:54.044 14:46:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:54.044 14:46:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:54.044 14:46:39 -- common/autotest_common.sh@859 -- # break 00:05:54.044 14:46:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:54.044 14:46:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:54.044 14:46:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.044 1+0 records in 00:05:54.044 1+0 records out 00:05:54.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156137 s, 26.2 MB/s 00:05:54.044 14:46:39 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.044 14:46:39 -- common/autotest_common.sh@872 -- # size=4096 00:05:54.044 14:46:39 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.044 14:46:39 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:54.044 14:46:39 -- common/autotest_common.sh@875 -- # return 0 00:05:54.044 14:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.044 14:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.044 14:46:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.302 /dev/nbd1 00:05:54.302 14:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.302 14:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.302 14:46:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:54.302 14:46:39 -- common/autotest_common.sh@855 -- # local i 00:05:54.302 14:46:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:54.302 14:46:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:54.302 14:46:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:54.302 14:46:39 -- common/autotest_common.sh@859 -- # break 00:05:54.302 14:46:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:54.302 14:46:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:54.302 14:46:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.302 1+0 records in 00:05:54.302 1+0 records out 00:05:54.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181956 s, 22.5 MB/s 00:05:54.302 14:46:39 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.302 14:46:39 -- common/autotest_common.sh@872 -- # size=4096 00:05:54.302 14:46:39 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.302 14:46:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:54.302 14:46:39 -- common/autotest_common.sh@875 -- # return 0 00:05:54.302 14:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.302 14:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.302 14:46:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.302 14:46:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.302 14:46:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.561 { 00:05:54.561 "nbd_device": "/dev/nbd0", 00:05:54.561 "bdev_name": "Malloc0" 00:05:54.561 }, 00:05:54.561 { 00:05:54.561 "nbd_device": "/dev/nbd1", 00:05:54.561 "bdev_name": "Malloc1" 00:05:54.561 } 00:05:54.561 ]' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.561 { 00:05:54.561 "nbd_device": "/dev/nbd0", 00:05:54.561 "bdev_name": "Malloc0" 00:05:54.561 }, 00:05:54.561 { 00:05:54.561 "nbd_device": "/dev/nbd1", 00:05:54.561 "bdev_name": "Malloc1" 00:05:54.561 } 00:05:54.561 ]' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.561 /dev/nbd1' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.561 /dev/nbd1' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.561 14:46:40 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.561 256+0 records in 00:05:54.561 256+0 records out 00:05:54.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495727 s, 212 MB/s 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.561 256+0 records in 00:05:54.561 256+0 records out 00:05:54.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239821 s, 43.7 MB/s 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.561 256+0 records in 00:05:54.561 256+0 records out 00:05:54.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226033 s, 46.4 MB/s 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@51 -- # local i 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.561 14:46:40 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.831 14:46:40 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.831 14:46:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.831 14:46:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.831 14:46:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.831 14:46:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.831 14:46:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.831 14:46:40 -- bdev/nbd_common.sh@41 -- # break 00:05:54.831 14:46:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.831 14:46:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.831 14:46:40 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@41 -- # break 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.088 14:46:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.346 14:46:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.346 14:46:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.346 14:46:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.346 14:46:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.346 14:46:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.346 14:46:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.346 14:46:41 -- bdev/nbd_common.sh@65 -- # true 00:05:55.346 14:46:41 -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.346 14:46:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.346 14:46:41 -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.346 14:46:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.346 14:46:41 -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.346 14:46:41 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.605 14:46:41 -- event/event.sh@35 -- # sleep 3 00:05:55.863 [2024-04-26 14:46:41.509930] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.863 [2024-04-26 14:46:41.597982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.863 [2024-04-26 14:46:41.597985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.120 [2024-04-26 14:46:41.654426] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.120 [2024-04-26 14:46:41.654505] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
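Teardown is the mirror image, and the trace above shows its three steps per device: an nbd_stop_disk RPC, a bounded poll of /proc/partitions until the kernel has actually dropped the device, and a final nbd_get_disks count that must come back zero. A sketch of that sequence; the 0.1 s delay is an assumption (the traced helper's sleep is not echoed), and rpc.py is assumed to be reachable on PATH:

rpc_sock=/var/tmp/spdk-nbd.sock

for dev in /dev/nbd0 /dev/nbd1; do
    rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do                   # waitfornbd_exit, reduced
        grep -q -w "$name" /proc/partitions || break  # gone from the kernel
        sleep 0.1                                     # assumed retry delay
    done
done

# grep -c prints 0 (and exits 1) when nothing matches, hence the || true
count=$(rpc.py -s "$rpc_sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || echo "NBD devices still exported" >&2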
00:05:58.642 14:46:44 -- event/event.sh@38 -- # waitforlisten 3654034 /var/tmp/spdk-nbd.sock 00:05:58.642 14:46:44 -- common/autotest_common.sh@817 -- # '[' -z 3654034 ']' 00:05:58.642 14:46:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.642 14:46:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.642 14:46:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.642 14:46:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.642 14:46:44 -- common/autotest_common.sh@10 -- # set +x 00:05:58.900 14:46:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.900 14:46:44 -- common/autotest_common.sh@850 -- # return 0 00:05:58.900 14:46:44 -- event/event.sh@39 -- # killprocess 3654034 00:05:58.900 14:46:44 -- common/autotest_common.sh@936 -- # '[' -z 3654034 ']' 00:05:58.900 14:46:44 -- common/autotest_common.sh@940 -- # kill -0 3654034 00:05:58.900 14:46:44 -- common/autotest_common.sh@941 -- # uname 00:05:58.900 14:46:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.900 14:46:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3654034 00:05:58.900 14:46:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.900 14:46:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.900 14:46:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3654034' 00:05:58.900 killing process with pid 3654034 00:05:58.900 14:46:44 -- common/autotest_common.sh@955 -- # kill 3654034 00:05:58.900 14:46:44 -- common/autotest_common.sh@960 -- # wait 3654034 00:05:59.158 spdk_app_start is called in Round 0. 00:05:59.158 Shutdown signal received, stop current app iteration 00:05:59.158 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 reinitialization... 00:05:59.158 spdk_app_start is called in Round 1. 00:05:59.158 Shutdown signal received, stop current app iteration 00:05:59.158 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 reinitialization... 00:05:59.158 spdk_app_start is called in Round 2. 00:05:59.158 Shutdown signal received, stop current app iteration 00:05:59.158 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 reinitialization... 00:05:59.158 spdk_app_start is called in Round 3. 
00:05:59.158 Shutdown signal received, stop current app iteration 00:05:59.158 14:46:44 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:59.158 14:46:44 -- event/event.sh@42 -- # return 0 00:05:59.158 00:05:59.158 real 0m17.667s 00:05:59.158 user 0m38.842s 00:05:59.158 sys 0m3.223s 00:05:59.158 14:46:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.158 14:46:44 -- common/autotest_common.sh@10 -- # set +x 00:05:59.158 ************************************ 00:05:59.158 END TEST app_repeat 00:05:59.158 ************************************ 00:05:59.158 14:46:44 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:59.158 14:46:44 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:59.158 14:46:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.158 14:46:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.159 14:46:44 -- common/autotest_common.sh@10 -- # set +x 00:05:59.159 ************************************ 00:05:59.159 START TEST cpu_locks 00:05:59.159 ************************************ 00:05:59.159 14:46:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:59.417 * Looking for test storage... 00:05:59.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:59.417 14:46:44 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:59.417 14:46:44 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:59.417 14:46:44 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:59.417 14:46:44 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:59.417 14:46:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.417 14:46:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.417 14:46:44 -- common/autotest_common.sh@10 -- # set +x 00:05:59.417 ************************************ 00:05:59.417 START TEST default_locks 00:05:59.417 ************************************ 00:05:59.417 14:46:45 -- common/autotest_common.sh@1111 -- # default_locks 00:05:59.417 14:46:45 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3656396 00:05:59.417 14:46:45 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.417 14:46:45 -- event/cpu_locks.sh@47 -- # waitforlisten 3656396 00:05:59.417 14:46:45 -- common/autotest_common.sh@817 -- # '[' -z 3656396 ']' 00:05:59.417 14:46:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.417 14:46:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.417 14:46:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.417 14:46:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.417 14:46:45 -- common/autotest_common.sh@10 -- # set +x 00:05:59.417 [2024-04-26 14:46:45.068144] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
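Every test in this file brackets its target the same way: launch spdk_tgt, waitforlisten on the RPC socket, and finally killprocess, which the trace above expands into a guarded kill: confirm the platform, confirm the PID's command name is still an SPDK reactor rather than something like sudo, then kill and reap it. A reduced sketch (the real helper in test/common/autotest_common.sh carries more platform and sudo handling than shown here):

killprocess() {
    local pid=$1 process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # never signal a privileged wrapper directly; the runs traced above all
    # resolve to reactor_0, so the plain kill path is taken
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # works because spdk_tgt is a child of the test shell
    fi
}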
00:05:59.417 [2024-04-26 14:46:45.068227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656396 ] 00:05:59.417 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.417 [2024-04-26 14:46:45.100910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:59.417 [2024-04-26 14:46:45.132546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.676 [2024-04-26 14:46:45.225377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.934 14:46:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.934 14:46:45 -- common/autotest_common.sh@850 -- # return 0 00:05:59.934 14:46:45 -- event/cpu_locks.sh@49 -- # locks_exist 3656396 00:05:59.934 14:46:45 -- event/cpu_locks.sh@22 -- # lslocks -p 3656396 00:05:59.934 14:46:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.191 lslocks: write error 00:06:00.191 14:46:45 -- event/cpu_locks.sh@50 -- # killprocess 3656396 00:06:00.191 14:46:45 -- common/autotest_common.sh@936 -- # '[' -z 3656396 ']' 00:06:00.191 14:46:45 -- common/autotest_common.sh@940 -- # kill -0 3656396 00:06:00.191 14:46:45 -- common/autotest_common.sh@941 -- # uname 00:06:00.191 14:46:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.191 14:46:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3656396 00:06:00.191 14:46:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.191 14:46:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.191 14:46:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3656396' 00:06:00.191 killing process with pid 3656396 00:06:00.191 14:46:45 -- common/autotest_common.sh@955 -- # kill 3656396 00:06:00.191 14:46:45 -- common/autotest_common.sh@960 -- # wait 3656396 00:06:00.757 14:46:46 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3656396 00:06:00.757 14:46:46 -- common/autotest_common.sh@638 -- # local es=0 00:06:00.757 14:46:46 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3656396 00:06:00.757 14:46:46 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:00.757 14:46:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.757 14:46:46 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:00.757 14:46:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.757 14:46:46 -- common/autotest_common.sh@641 -- # waitforlisten 3656396 00:06:00.757 14:46:46 -- common/autotest_common.sh@817 -- # '[' -z 3656396 ']' 00:06:00.757 14:46:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.757 14:46:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.757 14:46:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
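The lock probe itself is a single pipeline: lslocks lists the file locks held by the PID, and grep -q looks for the spdk_cpu_lock marker. The "lslocks: write error" lines in the trace are a side effect of that pairing rather than a failure: grep -q exits on the first match, closing the pipe while lslocks is still writing. In sketch form, under the same names as the trace:

# succeeds iff the process still holds at least one SPDK CPU-core lock
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

locks_exist 3656396 && echo "core lock is held"   # PID from the run above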
00:06:00.757 14:46:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.757 14:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:00.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3656396) - No such process 00:06:00.757 ERROR: process (pid: 3656396) is no longer running 00:06:00.757 14:46:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.757 14:46:46 -- common/autotest_common.sh@850 -- # return 1 00:06:00.757 14:46:46 -- common/autotest_common.sh@641 -- # es=1 00:06:00.757 14:46:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:00.757 14:46:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:00.757 14:46:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:00.757 14:46:46 -- event/cpu_locks.sh@54 -- # no_locks 00:06:00.757 14:46:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.757 14:46:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.757 14:46:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.757 00:06:00.758 real 0m1.229s 00:06:00.758 user 0m1.175s 00:06:00.758 sys 0m0.568s 00:06:00.758 14:46:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.758 14:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:00.758 ************************************ 00:06:00.758 END TEST default_locks 00:06:00.758 ************************************ 00:06:00.758 14:46:46 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:00.758 14:46:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.758 14:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.758 14:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:00.758 ************************************ 00:06:00.758 START TEST default_locks_via_rpc 00:06:00.758 ************************************ 00:06:00.758 14:46:46 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:00.758 14:46:46 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3656575 00:06:00.758 14:46:46 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.758 14:46:46 -- event/cpu_locks.sh@63 -- # waitforlisten 3656575 00:06:00.758 14:46:46 -- common/autotest_common.sh@817 -- # '[' -z 3656575 ']' 00:06:00.758 14:46:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.758 14:46:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.758 14:46:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.758 14:46:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.758 14:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:00.758 [2024-04-26 14:46:46.422987] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:00.758 [2024-04-26 14:46:46.423127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656575 ] 00:06:00.758 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.758 [2024-04-26 14:46:46.456146] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
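The "NOT waitforlisten" step above is the suite's negative assertion: run a command, capture its exit status, and succeed only when the command failed. The traced helper also inspects statuses above 128 (death by signal) before deciding; the sketch below collapses that case to a plain failure, which matches the behavior visible in this run, and assumes autotest_common.sh is sourced so waitforlisten is defined:

NOT() {
    local es=0
    "$@" || es=$?
    ((es > 128)) && es=1   # signal exits: simplified from the real helper
    ! ((es == 0))          # invert: succeed only if the command failed
}

# the target was killed above, so its RPC socket must never come back
NOT waitforlisten 3656396 /var/tmp/spdk.sock && echo "confirmed dead"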
00:06:00.758 [2024-04-26 14:46:46.482374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.015 [2024-04-26 14:46:46.569706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.273 14:46:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:01.273 14:46:46 -- common/autotest_common.sh@850 -- # return 0 00:06:01.273 14:46:46 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.273 14:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.273 14:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:01.273 14:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.273 14:46:46 -- event/cpu_locks.sh@67 -- # no_locks 00:06:01.273 14:46:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.273 14:46:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.273 14:46:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.273 14:46:46 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.273 14:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.273 14:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:01.273 14:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.273 14:46:46 -- event/cpu_locks.sh@71 -- # locks_exist 3656575 00:06:01.273 14:46:46 -- event/cpu_locks.sh@22 -- # lslocks -p 3656575 00:06:01.273 14:46:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.530 14:46:47 -- event/cpu_locks.sh@73 -- # killprocess 3656575 00:06:01.530 14:46:47 -- common/autotest_common.sh@936 -- # '[' -z 3656575 ']' 00:06:01.530 14:46:47 -- common/autotest_common.sh@940 -- # kill -0 3656575 00:06:01.530 14:46:47 -- common/autotest_common.sh@941 -- # uname 00:06:01.530 14:46:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.530 14:46:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3656575 00:06:01.530 14:46:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.530 14:46:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.530 14:46:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3656575' 00:06:01.530 killing process with pid 3656575 00:06:01.530 14:46:47 -- common/autotest_common.sh@955 -- # kill 3656575 00:06:01.530 14:46:47 -- common/autotest_common.sh@960 -- # wait 3656575 00:06:02.096 00:06:02.097 real 0m1.166s 00:06:02.097 user 0m1.099s 00:06:02.097 sys 0m0.541s 00:06:02.097 14:46:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.097 14:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:02.097 ************************************ 00:06:02.097 END TEST default_locks_via_rpc 00:06:02.097 ************************************ 00:06:02.097 14:46:47 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.097 14:46:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.097 14:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.097 14:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:02.097 ************************************ 00:06:02.097 START TEST non_locking_app_on_locked_coremask 00:06:02.097 ************************************ 00:06:02.097 14:46:47 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:02.097 14:46:47 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3656741 00:06:02.097 14:46:47 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
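default_locks_via_rpc drives the same lock state over JSON-RPC instead of startup flags: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-acquires them, with /var/tmp/spdk_cpu_lock_* as the observable side effect. A sketch of the round trip traced above (rpc.py invocation shortened; nullglob makes an empty glob count as zero files):

shopt -s nullglob
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"   # illustrative path
spdk_tgt_pid=3656575                           # PID from the run above

$rpc framework_disable_cpumask_locks
lock_files=(/var/tmp/spdk_cpu_lock_*)
((${#lock_files[@]} == 0)) || echo "lock files survived the disable" >&2

$rpc framework_enable_cpumask_locks
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # lock is held again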
00:06:02.097 14:46:47 -- event/cpu_locks.sh@81 -- # waitforlisten 3656741 /var/tmp/spdk.sock 00:06:02.097 14:46:47 -- common/autotest_common.sh@817 -- # '[' -z 3656741 ']' 00:06:02.097 14:46:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.097 14:46:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.097 14:46:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.097 14:46:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.097 14:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:02.097 [2024-04-26 14:46:47.719173] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:02.097 [2024-04-26 14:46:47.719285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656741 ] 00:06:02.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.097 [2024-04-26 14:46:47.752398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:02.097 [2024-04-26 14:46:47.778843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.354 [2024-04-26 14:46:47.866574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.621 14:46:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.621 14:46:48 -- common/autotest_common.sh@850 -- # return 0 00:06:02.621 14:46:48 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3656746 00:06:02.621 14:46:48 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:02.621 14:46:48 -- event/cpu_locks.sh@85 -- # waitforlisten 3656746 /var/tmp/spdk2.sock 00:06:02.621 14:46:48 -- common/autotest_common.sh@817 -- # '[' -z 3656746 ']' 00:06:02.621 14:46:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.621 14:46:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.621 14:46:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.621 14:46:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.621 14:46:48 -- common/autotest_common.sh@10 -- # set +x 00:06:02.621 [2024-04-26 14:46:48.171299] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:02.621 [2024-04-26 14:46:48.171404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656746 ] 00:06:02.621 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.621 [2024-04-26 14:46:48.207868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:02.621 [2024-04-26 14:46:48.271113] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
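non_locking_app_on_locked_coremask exercises the escape hatch: the first target claims core 0 through its lock file, and a second target can still share that core when started with --disable-cpumask-locks, on its own RPC socket so the two instances can be driven independently. The launch pattern, sketched with the flags from the trace (the spdk_tgt path is illustrative, and waitforlisten comes from the sourced autotest_common.sh):

spdk_tgt=./build/bin/spdk_tgt   # illustrative path

$spdk_tgt -m 0x1 &              # claims /var/tmp/spdk_cpu_lock_000
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

# same core mask, but no lock acquisition and a second RPC socket
$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock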
00:06:02.621 [2024-04-26 14:46:48.271143] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.878 [2024-04-26 14:46:48.449201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.451 14:46:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.451 14:46:49 -- common/autotest_common.sh@850 -- # return 0 00:06:03.451 14:46:49 -- event/cpu_locks.sh@87 -- # locks_exist 3656741 00:06:03.451 14:46:49 -- event/cpu_locks.sh@22 -- # lslocks -p 3656741 00:06:03.451 14:46:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.016 lslocks: write error 00:06:04.016 14:46:49 -- event/cpu_locks.sh@89 -- # killprocess 3656741 00:06:04.016 14:46:49 -- common/autotest_common.sh@936 -- # '[' -z 3656741 ']' 00:06:04.016 14:46:49 -- common/autotest_common.sh@940 -- # kill -0 3656741 00:06:04.016 14:46:49 -- common/autotest_common.sh@941 -- # uname 00:06:04.016 14:46:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.016 14:46:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3656741 00:06:04.016 14:46:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.016 14:46:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.016 14:46:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3656741' 00:06:04.016 killing process with pid 3656741 00:06:04.016 14:46:49 -- common/autotest_common.sh@955 -- # kill 3656741 00:06:04.016 14:46:49 -- common/autotest_common.sh@960 -- # wait 3656741 00:06:04.949 14:46:50 -- event/cpu_locks.sh@90 -- # killprocess 3656746 00:06:04.949 14:46:50 -- common/autotest_common.sh@936 -- # '[' -z 3656746 ']' 00:06:04.949 14:46:50 -- common/autotest_common.sh@940 -- # kill -0 3656746 00:06:04.949 14:46:50 -- common/autotest_common.sh@941 -- # uname 00:06:04.949 14:46:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.949 14:46:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3656746 00:06:04.949 14:46:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.949 14:46:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.949 14:46:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3656746' 00:06:04.949 killing process with pid 3656746 00:06:04.949 14:46:50 -- common/autotest_common.sh@955 -- # kill 3656746 00:06:04.949 14:46:50 -- common/autotest_common.sh@960 -- # wait 3656746 00:06:05.207 00:06:05.207 real 0m3.102s 00:06:05.207 user 0m3.229s 00:06:05.207 sys 0m1.044s 00:06:05.207 14:46:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.207 14:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:05.207 ************************************ 00:06:05.207 END TEST non_locking_app_on_locked_coremask 00:06:05.207 ************************************ 00:06:05.207 14:46:50 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.207 14:46:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.207 14:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.207 14:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:05.207 ************************************ 00:06:05.207 START TEST locking_app_on_unlocked_coremask 00:06:05.207 ************************************ 00:06:05.207 14:46:50 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:05.207 14:46:50 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3657182 00:06:05.207 14:46:50 -- 
event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.207 14:46:50 -- event/cpu_locks.sh@99 -- # waitforlisten 3657182 /var/tmp/spdk.sock 00:06:05.207 14:46:50 -- common/autotest_common.sh@817 -- # '[' -z 3657182 ']' 00:06:05.207 14:46:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.207 14:46:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:05.207 14:46:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.207 14:46:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:05.207 14:46:50 -- common/autotest_common.sh@10 -- # set +x 00:06:05.207 [2024-04-26 14:46:50.935582] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:05.207 [2024-04-26 14:46:50.935655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657182 ] 00:06:05.466 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.466 [2024-04-26 14:46:50.969867] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.466 [2024-04-26 14:46:50.996941] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:05.466 [2024-04-26 14:46:50.996967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.466 [2024-04-26 14:46:51.083653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.724 14:46:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:05.724 14:46:51 -- common/autotest_common.sh@850 -- # return 0 00:06:05.724 14:46:51 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3657186 00:06:05.724 14:46:51 -- event/cpu_locks.sh@103 -- # waitforlisten 3657186 /var/tmp/spdk2.sock 00:06:05.724 14:46:51 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.724 14:46:51 -- common/autotest_common.sh@817 -- # '[' -z 3657186 ']' 00:06:05.724 14:46:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.724 14:46:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:05.724 14:46:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.724 14:46:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:05.724 14:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:05.724 [2024-04-26 14:46:51.391118] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:05.724 [2024-04-26 14:46:51.391224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657186 ] 00:06:05.724 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.724 [2024-04-26 14:46:51.426802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:05.982 [2024-04-26 14:46:51.485207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.982 [2024-04-26 14:46:51.667680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.917 14:46:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:06.917 14:46:52 -- common/autotest_common.sh@850 -- # return 0 00:06:06.917 14:46:52 -- event/cpu_locks.sh@105 -- # locks_exist 3657186 00:06:06.917 14:46:52 -- event/cpu_locks.sh@22 -- # lslocks -p 3657186 00:06:06.917 14:46:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.175 lslocks: write error 00:06:07.175 14:46:52 -- event/cpu_locks.sh@107 -- # killprocess 3657182 00:06:07.175 14:46:52 -- common/autotest_common.sh@936 -- # '[' -z 3657182 ']' 00:06:07.175 14:46:52 -- common/autotest_common.sh@940 -- # kill -0 3657182 00:06:07.175 14:46:52 -- common/autotest_common.sh@941 -- # uname 00:06:07.175 14:46:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.175 14:46:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3657182 00:06:07.433 14:46:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.433 14:46:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.433 14:46:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3657182' 00:06:07.433 killing process with pid 3657182 00:06:07.433 14:46:52 -- common/autotest_common.sh@955 -- # kill 3657182 00:06:07.433 14:46:52 -- common/autotest_common.sh@960 -- # wait 3657182 00:06:07.998 14:46:53 -- event/cpu_locks.sh@108 -- # killprocess 3657186 00:06:07.998 14:46:53 -- common/autotest_common.sh@936 -- # '[' -z 3657186 ']' 00:06:07.998 14:46:53 -- common/autotest_common.sh@940 -- # kill -0 3657186 00:06:07.998 14:46:53 -- common/autotest_common.sh@941 -- # uname 00:06:07.998 14:46:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.998 14:46:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3657186 00:06:08.256 14:46:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.256 14:46:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.256 14:46:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3657186' 00:06:08.256 killing process with pid 3657186 00:06:08.256 14:46:53 -- common/autotest_common.sh@955 -- # kill 3657186 00:06:08.256 14:46:53 -- common/autotest_common.sh@960 -- # wait 3657186 00:06:08.514 00:06:08.514 real 0m3.253s 00:06:08.514 user 0m3.390s 00:06:08.514 sys 0m1.073s 00:06:08.514 14:46:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.514 14:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.514 ************************************ 00:06:08.514 END TEST locking_app_on_unlocked_coremask 00:06:08.514 ************************************ 00:06:08.514 14:46:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:08.514 14:46:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.514 14:46:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.514 14:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.772 ************************************ 00:06:08.772 START TEST locking_app_on_locked_coremask 00:06:08.772 ************************************ 00:06:08.772 14:46:54 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:08.772 14:46:54 -- event/cpu_locks.sh@115 
-- # spdk_tgt_pid=3657621 00:06:08.772 14:46:54 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.772 14:46:54 -- event/cpu_locks.sh@116 -- # waitforlisten 3657621 /var/tmp/spdk.sock 00:06:08.772 14:46:54 -- common/autotest_common.sh@817 -- # '[' -z 3657621 ']' 00:06:08.772 14:46:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.772 14:46:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:08.772 14:46:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.772 14:46:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:08.772 14:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.772 [2024-04-26 14:46:54.318552] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:08.773 [2024-04-26 14:46:54.318647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657621 ] 00:06:08.773 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.773 [2024-04-26 14:46:54.351333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:08.773 [2024-04-26 14:46:54.377217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.773 [2024-04-26 14:46:54.463439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.030 14:46:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:09.030 14:46:54 -- common/autotest_common.sh@850 -- # return 0 00:06:09.030 14:46:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3657633 00:06:09.030 14:46:54 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.030 14:46:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3657633 /var/tmp/spdk2.sock 00:06:09.030 14:46:54 -- common/autotest_common.sh@638 -- # local es=0 00:06:09.030 14:46:54 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3657633 /var/tmp/spdk2.sock 00:06:09.030 14:46:54 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:09.030 14:46:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.030 14:46:54 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:09.030 14:46:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.030 14:46:54 -- common/autotest_common.sh@641 -- # waitforlisten 3657633 /var/tmp/spdk2.sock 00:06:09.030 14:46:54 -- common/autotest_common.sh@817 -- # '[' -z 3657633 ']' 00:06:09.030 14:46:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.030 14:46:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:09.030 14:46:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:09.030 14:46:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:09.030 14:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:09.288 [2024-04-26 14:46:54.772975] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:09.288 [2024-04-26 14:46:54.773103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657633 ] 00:06:09.288 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.288 [2024-04-26 14:46:54.813797] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.288 [2024-04-26 14:46:54.872254] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3657621 has claimed it. 00:06:09.288 [2024-04-26 14:46:54.872302] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3657633) - No such process 00:06:09.854 ERROR: process (pid: 3657633) is no longer running 00:06:09.854 14:46:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:09.854 14:46:55 -- common/autotest_common.sh@850 -- # return 1 00:06:09.854 14:46:55 -- common/autotest_common.sh@641 -- # es=1 00:06:09.854 14:46:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:09.854 14:46:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:09.854 14:46:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:09.854 14:46:55 -- event/cpu_locks.sh@122 -- # locks_exist 3657621 00:06:09.854 14:46:55 -- event/cpu_locks.sh@22 -- # lslocks -p 3657621 00:06:09.854 14:46:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.451 lslocks: write error 00:06:10.451 14:46:55 -- event/cpu_locks.sh@124 -- # killprocess 3657621 00:06:10.451 14:46:55 -- common/autotest_common.sh@936 -- # '[' -z 3657621 ']' 00:06:10.451 14:46:55 -- common/autotest_common.sh@940 -- # kill -0 3657621 00:06:10.451 14:46:55 -- common/autotest_common.sh@941 -- # uname 00:06:10.451 14:46:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.451 14:46:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3657621 00:06:10.451 14:46:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.451 14:46:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.451 14:46:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3657621' 00:06:10.451 killing process with pid 3657621 00:06:10.451 14:46:55 -- common/autotest_common.sh@955 -- # kill 3657621 00:06:10.451 14:46:55 -- common/autotest_common.sh@960 -- # wait 3657621 00:06:10.710 00:06:10.710 real 0m2.066s 00:06:10.710 user 0m2.222s 00:06:10.710 sys 0m0.660s 00:06:10.710 14:46:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.710 14:46:56 -- common/autotest_common.sh@10 -- # set +x 00:06:10.710 ************************************ 00:06:10.710 END TEST locking_app_on_locked_coremask 00:06:10.710 ************************************ 00:06:10.710 14:46:56 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.710 14:46:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.710 14:46:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 
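locking_app_on_locked_coremask, just completed above, is the negative counterpart: with locks left enabled, a second target on the same mask must abort during startup, app.c logs that core 0 is already claimed by the first PID, and the later "kill: ... No such process" merely confirms the process never survived. Reusing the helpers sketched earlier, the scenario is roughly:

$spdk_tgt -m 0x1 &
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

# expected startup abort:
# "Cannot create lock on core 0, probably process <pid1> has claimed it."
$spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock

locks_exist "$pid1"   # the first instance still owns the core-0 lock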
00:06:10.710 14:46:56 -- common/autotest_common.sh@10 -- # set +x 00:06:10.968 ************************************ 00:06:10.968 START TEST locking_overlapped_coremask 00:06:10.968 ************************************ 00:06:10.968 14:46:56 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:10.968 14:46:56 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3657931 00:06:10.968 14:46:56 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.968 14:46:56 -- event/cpu_locks.sh@133 -- # waitforlisten 3657931 /var/tmp/spdk.sock 00:06:10.968 14:46:56 -- common/autotest_common.sh@817 -- # '[' -z 3657931 ']' 00:06:10.968 14:46:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.968 14:46:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:10.968 14:46:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.968 14:46:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:10.968 14:46:56 -- common/autotest_common.sh@10 -- # set +x 00:06:10.968 [2024-04-26 14:46:56.505084] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:10.968 [2024-04-26 14:46:56.505159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657931 ] 00:06:10.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.968 [2024-04-26 14:46:56.536144] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:10.968 [2024-04-26 14:46:56.566602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.968 [2024-04-26 14:46:56.657795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.968 [2024-04-26 14:46:56.657863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.968 [2024-04-26 14:46:56.657866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.226 14:46:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.226 14:46:56 -- common/autotest_common.sh@850 -- # return 0 00:06:11.226 14:46:56 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3657943 00:06:11.226 14:46:56 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3657943 /var/tmp/spdk2.sock 00:06:11.226 14:46:56 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.226 14:46:56 -- common/autotest_common.sh@638 -- # local es=0 00:06:11.226 14:46:56 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3657943 /var/tmp/spdk2.sock 00:06:11.226 14:46:56 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:11.226 14:46:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.226 14:46:56 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:11.226 14:46:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.226 14:46:56 -- common/autotest_common.sh@641 -- # waitforlisten 3657943 /var/tmp/spdk2.sock 00:06:11.226 14:46:56 -- common/autotest_common.sh@817 -- # '[' -z 3657943 ']' 00:06:11.226 14:46:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.226 14:46:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.226 14:46:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.226 14:46:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.226 14:46:56 -- common/autotest_common.sh@10 -- # set +x 00:06:11.226 [2024-04-26 14:46:56.954229] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:11.226 [2024-04-26 14:46:56.954317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657943 ] 00:06:11.484 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.484 [2024-04-26 14:46:56.989144] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.484 [2024-04-26 14:46:57.043765] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3657931 has claimed it. 00:06:11.484 [2024-04-26 14:46:57.043815] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
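The exit above is the intended outcome of the overlapped-coremask case: the first spdk_tgt (-m 0x7, pid 3657931) holds the per-core lock files, so a second target whose mask also covers core 2 cannot start. A rough sketch of the same collision, assuming a local build and the default /var/tmp/spdk_cpu_lock_* lock paths:

  ./build/bin/spdk_tgt -m 0x7 &                        # claims cores 0,1,2
  sleep 2
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock  # mask 0x1c = cores 2,3,4; overlaps on core 2
  # expected: "Cannot create lock on core 2, probably process <pid> has claimed it." and exit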
00:06:12.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3657943) - No such process 00:06:12.049 ERROR: process (pid: 3657943) is no longer running 00:06:12.049 14:46:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:12.049 14:46:57 -- common/autotest_common.sh@850 -- # return 1 00:06:12.049 14:46:57 -- common/autotest_common.sh@641 -- # es=1 00:06:12.049 14:46:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:12.049 14:46:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:12.049 14:46:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:12.049 14:46:57 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.049 14:46:57 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.049 14:46:57 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.049 14:46:57 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.049 14:46:57 -- event/cpu_locks.sh@141 -- # killprocess 3657931 00:06:12.049 14:46:57 -- common/autotest_common.sh@936 -- # '[' -z 3657931 ']' 00:06:12.049 14:46:57 -- common/autotest_common.sh@940 -- # kill -0 3657931 00:06:12.049 14:46:57 -- common/autotest_common.sh@941 -- # uname 00:06:12.049 14:46:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.049 14:46:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3657931 00:06:12.049 14:46:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.049 14:46:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.049 14:46:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3657931' 00:06:12.049 killing process with pid 3657931 00:06:12.049 14:46:57 -- common/autotest_common.sh@955 -- # kill 3657931 00:06:12.049 14:46:57 -- common/autotest_common.sh@960 -- # wait 3657931 00:06:12.615 00:06:12.615 real 0m1.616s 00:06:12.615 user 0m4.349s 00:06:12.615 sys 0m0.453s 00:06:12.615 14:46:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.615 14:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:12.615 ************************************ 00:06:12.615 END TEST locking_overlapped_coremask 00:06:12.615 ************************************ 00:06:12.615 14:46:58 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.615 14:46:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.615 14:46:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.615 14:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:12.615 ************************************ 00:06:12.615 START TEST locking_overlapped_coremask_via_rpc 00:06:12.615 ************************************ 00:06:12.615 14:46:58 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:12.615 14:46:58 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3658111 00:06:12.615 14:46:58 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.615 14:46:58 -- event/cpu_locks.sh@149 -- # waitforlisten 3658111 /var/tmp/spdk.sock 00:06:12.615 14:46:58 -- common/autotest_common.sh@817 -- # '[' -z 3658111 ']' 00:06:12.615 14:46:58 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.615 14:46:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:12.615 14:46:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.615 14:46:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:12.615 14:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:12.615 [2024-04-26 14:46:58.250775] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:12.615 [2024-04-26 14:46:58.250856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658111 ] 00:06:12.615 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.615 [2024-04-26 14:46:58.285488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.615 [2024-04-26 14:46:58.313463] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.615 [2024-04-26 14:46:58.313491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.873 [2024-04-26 14:46:58.402358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.873 [2024-04-26 14:46:58.402416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.873 [2024-04-26 14:46:58.402419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.131 14:46:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.131 14:46:58 -- common/autotest_common.sh@850 -- # return 0 00:06:13.131 14:46:58 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3658241 00:06:13.131 14:46:58 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.131 14:46:58 -- event/cpu_locks.sh@153 -- # waitforlisten 3658241 /var/tmp/spdk2.sock 00:06:13.131 14:46:58 -- common/autotest_common.sh@817 -- # '[' -z 3658241 ']' 00:06:13.131 14:46:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.131 14:46:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.131 14:46:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.131 14:46:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.131 14:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:13.131 [2024-04-26 14:46:58.682580] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:13.131 [2024-04-26 14:46:58.682676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658241 ] 00:06:13.131 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.131 [2024-04-26 14:46:58.718178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
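Because this target was started with --disable-cpumask-locks, its reactors come up without taking any /var/tmp/spdk_cpu_lock_* files, which is what lets a second target with an overlapping mask start below; the locks are only claimed once framework_enable_cpumask_locks is called. A minimal sketch of that behavior, assuming default lock paths:

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no core locks taken yet'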
00:06:13.131 [2024-04-26 14:46:58.772688] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:13.131 [2024-04-26 14:46:58.772715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.388 [2024-04-26 14:46:58.941988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.388 [2024-04-26 14:46:58.942054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.389 [2024-04-26 14:46:58.942056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.954 14:46:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.954 14:46:59 -- common/autotest_common.sh@850 -- # return 0 00:06:13.954 14:46:59 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.954 14:46:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.954 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:06:13.954 14:46:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.954 14:46:59 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.954 14:46:59 -- common/autotest_common.sh@638 -- # local es=0 00:06:13.954 14:46:59 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.954 14:46:59 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:13.954 14:46:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:13.954 14:46:59 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:13.954 14:46:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:13.954 14:46:59 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.954 14:46:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.954 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:06:13.954 [2024-04-26 14:46:59.635124] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3658111 has claimed it. 00:06:13.954 request: 00:06:13.954 { 00:06:13.954 "method": "framework_enable_cpumask_locks", 00:06:13.954 "req_id": 1 00:06:13.954 } 00:06:13.954 Got JSON-RPC error response 00:06:13.954 response: 00:06:13.954 { 00:06:13.954 "code": -32603, 00:06:13.954 "message": "Failed to claim CPU core: 2" 00:06:13.954 } 00:06:13.954 14:46:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:13.954 14:46:59 -- common/autotest_common.sh@641 -- # es=1 00:06:13.954 14:46:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:13.954 14:46:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:13.954 14:46:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:13.954 14:46:59 -- event/cpu_locks.sh@158 -- # waitforlisten 3658111 /var/tmp/spdk.sock 00:06:13.954 14:46:59 -- common/autotest_common.sh@817 -- # '[' -z 3658111 ']' 00:06:13.954 14:46:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.954 14:46:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.954 14:46:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
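The JSON-RPC exchange above should be reproducible by hand with SPDK's scripts/rpc.py client (an assumption; this test drives it through the rpc_cmd wrapper): the first target claims the locks for cores 0-2, and the second target's request then fails on the shared core.

  ./scripts/rpc.py framework_enable_cpumask_locks                         # target on /var/tmp/spdk.sock: locks taken
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: -32603, "Failed to claim CPU core: 2"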
00:06:13.954 14:46:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.954 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:06:14.212 14:46:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:14.212 14:46:59 -- common/autotest_common.sh@850 -- # return 0 00:06:14.212 14:46:59 -- event/cpu_locks.sh@159 -- # waitforlisten 3658241 /var/tmp/spdk2.sock 00:06:14.212 14:46:59 -- common/autotest_common.sh@817 -- # '[' -z 3658241 ']' 00:06:14.212 14:46:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.212 14:46:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:14.212 14:46:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.212 14:46:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:14.212 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:06:14.469 14:47:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:14.469 14:47:00 -- common/autotest_common.sh@850 -- # return 0 00:06:14.469 14:47:00 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:14.469 14:47:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:14.469 14:47:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:14.469 14:47:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:14.469 00:06:14.469 real 0m1.944s 00:06:14.469 user 0m1.009s 00:06:14.469 sys 0m0.186s 00:06:14.469 14:47:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.469 14:47:00 -- common/autotest_common.sh@10 -- # set +x 00:06:14.469 ************************************ 00:06:14.469 END TEST locking_overlapped_coremask_via_rpc 00:06:14.469 ************************************ 00:06:14.469 14:47:00 -- event/cpu_locks.sh@174 -- # cleanup 00:06:14.469 14:47:00 -- event/cpu_locks.sh@15 -- # [[ -z 3658111 ]] 00:06:14.469 14:47:00 -- event/cpu_locks.sh@15 -- # killprocess 3658111 00:06:14.469 14:47:00 -- common/autotest_common.sh@936 -- # '[' -z 3658111 ']' 00:06:14.469 14:47:00 -- common/autotest_common.sh@940 -- # kill -0 3658111 00:06:14.469 14:47:00 -- common/autotest_common.sh@941 -- # uname 00:06:14.469 14:47:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.469 14:47:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3658111 00:06:14.469 14:47:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.469 14:47:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.469 14:47:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3658111' 00:06:14.469 killing process with pid 3658111 00:06:14.469 14:47:00 -- common/autotest_common.sh@955 -- # kill 3658111 00:06:14.469 14:47:00 -- common/autotest_common.sh@960 -- # wait 3658111 00:06:15.034 14:47:00 -- event/cpu_locks.sh@16 -- # [[ -z 3658241 ]] 00:06:15.034 14:47:00 -- event/cpu_locks.sh@16 -- # killprocess 3658241 00:06:15.034 14:47:00 -- common/autotest_common.sh@936 -- # '[' -z 3658241 ']' 00:06:15.034 14:47:00 -- common/autotest_common.sh@940 -- # kill -0 3658241 00:06:15.034 14:47:00 -- common/autotest_common.sh@941 -- # uname 
00:06:15.034 14:47:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.034 14:47:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3658241 00:06:15.034 14:47:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:15.034 14:47:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:15.034 14:47:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3658241' 00:06:15.034 killing process with pid 3658241 00:06:15.034 14:47:00 -- common/autotest_common.sh@955 -- # kill 3658241 00:06:15.034 14:47:00 -- common/autotest_common.sh@960 -- # wait 3658241 00:06:15.293 14:47:01 -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.293 14:47:01 -- event/cpu_locks.sh@1 -- # cleanup 00:06:15.293 14:47:01 -- event/cpu_locks.sh@15 -- # [[ -z 3658111 ]] 00:06:15.293 14:47:01 -- event/cpu_locks.sh@15 -- # killprocess 3658111 00:06:15.293 14:47:01 -- common/autotest_common.sh@936 -- # '[' -z 3658111 ']' 00:06:15.293 14:47:01 -- common/autotest_common.sh@940 -- # kill -0 3658111 00:06:15.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3658111) - No such process 00:06:15.293 14:47:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3658111 is not found' 00:06:15.293 Process with pid 3658111 is not found 00:06:15.293 14:47:01 -- event/cpu_locks.sh@16 -- # [[ -z 3658241 ]] 00:06:15.293 14:47:01 -- event/cpu_locks.sh@16 -- # killprocess 3658241 00:06:15.293 14:47:01 -- common/autotest_common.sh@936 -- # '[' -z 3658241 ']' 00:06:15.293 14:47:01 -- common/autotest_common.sh@940 -- # kill -0 3658241 00:06:15.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3658241) - No such process 00:06:15.293 14:47:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3658241 is not found' 00:06:15.293 Process with pid 3658241 is not found 00:06:15.293 14:47:01 -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.293 00:06:15.293 real 0m16.146s 00:06:15.293 user 0m27.351s 00:06:15.293 sys 0m5.680s 00:06:15.293 14:47:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.293 14:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:15.293 ************************************ 00:06:15.293 END TEST cpu_locks 00:06:15.293 ************************************ 00:06:15.551 00:06:15.551 real 0m42.642s 00:06:15.552 user 1m19.970s 00:06:15.552 sys 0m9.988s 00:06:15.552 14:47:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.552 14:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:15.552 ************************************ 00:06:15.552 END TEST event 00:06:15.552 ************************************ 00:06:15.552 14:47:01 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:15.552 14:47:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.552 14:47:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.552 14:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:15.552 ************************************ 00:06:15.552 START TEST thread 00:06:15.552 ************************************ 00:06:15.552 14:47:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:15.552 * Looking for test storage... 
00:06:15.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:15.552 14:47:01 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.552 14:47:01 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:15.552 14:47:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.552 14:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:15.810 ************************************ 00:06:15.810 START TEST thread_poller_perf 00:06:15.810 ************************************ 00:06:15.810 14:47:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.810 [2024-04-26 14:47:01.337583] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:15.810 [2024-04-26 14:47:01.337650] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658623 ] 00:06:15.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.810 [2024-04-26 14:47:01.370118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.810 [2024-04-26 14:47:01.398068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.810 [2024-04-26 14:47:01.487390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.810 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:17.183 ====================================== 00:06:17.183 busy:2713148744 (cyc) 00:06:17.183 total_run_count: 292000 00:06:17.183 tsc_hz: 2700000000 (cyc) 00:06:17.183 ====================================== 00:06:17.183 poller_cost: 9291 (cyc), 3441 (nsec) 00:06:17.183 00:06:17.183 real 0m1.256s 00:06:17.183 user 0m1.176s 00:06:17.183 sys 0m0.074s 00:06:17.183 14:47:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.183 14:47:02 -- common/autotest_common.sh@10 -- # set +x 00:06:17.183 ************************************ 00:06:17.183 END TEST thread_poller_perf 00:06:17.183 ************************************ 00:06:17.183 14:47:02 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.183 14:47:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:17.183 14:47:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.183 14:47:02 -- common/autotest_common.sh@10 -- # set +x 00:06:17.183 ************************************ 00:06:17.183 START TEST thread_poller_perf 00:06:17.183 ************************************ 00:06:17.183 14:47:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.183 [2024-04-26 14:47:02.712652] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
00:06:17.183 [2024-04-26 14:47:02.712715] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658788 ] 00:06:17.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.183 [2024-04-26 14:47:02.745178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.183 [2024-04-26 14:47:02.777654] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.183 [2024-04-26 14:47:02.865631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.183 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:18.554 ====================================== 00:06:18.554 busy:2702782244 (cyc) 00:06:18.554 total_run_count: 3834000 00:06:18.554 tsc_hz: 2700000000 (cyc) 00:06:18.554 ====================================== 00:06:18.554 poller_cost: 704 (cyc), 260 (nsec) 00:06:18.554 00:06:18.554 real 0m1.246s 00:06:18.554 user 0m1.159s 00:06:18.554 sys 0m0.082s 00:06:18.554 14:47:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.554 14:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:18.554 ************************************ 00:06:18.554 END TEST thread_poller_perf 00:06:18.554 ************************************ 00:06:18.554 14:47:03 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:18.554 00:06:18.554 real 0m2.793s 00:06:18.554 user 0m2.447s 00:06:18.554 sys 0m0.324s 00:06:18.554 14:47:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.554 14:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:18.554 ************************************ 00:06:18.554 END TEST thread 00:06:18.554 ************************************ 00:06:18.554 14:47:03 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:18.554 14:47:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.554 14:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.554 14:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:18.554 ************************************ 00:06:18.554 START TEST accel 00:06:18.554 ************************************ 00:06:18.554 14:47:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:18.554 * Looking for test storage... 00:06:18.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:18.554 14:47:04 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:18.554 14:47:04 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:18.554 14:47:04 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:18.554 14:47:04 -- accel/accel.sh@62 -- # spdk_tgt_pid=3659105 00:06:18.554 14:47:04 -- accel/accel.sh@63 -- # waitforlisten 3659105 00:06:18.554 14:47:04 -- common/autotest_common.sh@817 -- # '[' -z 3659105 ']' 00:06:18.554 14:47:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.554 14:47:04 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:18.554 14:47:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.554 14:47:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:18.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.554 14:47:04 -- accel/accel.sh@61 -- # build_accel_config 00:06:18.554 14:47:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.554 14:47:04 -- common/autotest_common.sh@10 -- # set +x 00:06:18.554 14:47:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.554 14:47:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.554 14:47:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.554 14:47:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.554 14:47:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.554 14:47:04 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.554 14:47:04 -- accel/accel.sh@41 -- # jq -r . 00:06:18.554 [2024-04-26 14:47:04.189692] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:18.554 [2024-04-26 14:47:04.189788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659105 ] 00:06:18.554 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.554 [2024-04-26 14:47:04.222721] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.554 [2024-04-26 14:47:04.249434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.812 [2024-04-26 14:47:04.334862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.069 14:47:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.069 14:47:04 -- common/autotest_common.sh@850 -- # return 0 00:06:19.070 14:47:04 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:19.070 14:47:04 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:19.070 14:47:04 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:19.070 14:47:04 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:19.070 14:47:04 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:19.070 14:47:04 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:19.070 14:47:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.070 14:47:04 -- common/autotest_common.sh@10 -- # set +x 00:06:19.070 14:47:04 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:19.070 14:47:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # IFS== 00:06:19.070 14:47:04 -- accel/accel.sh@72 -- # read -r opc module 00:06:19.070 14:47:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.070 14:47:04 -- accel/accel.sh@75 -- # killprocess 3659105 00:06:19.070 14:47:04 -- common/autotest_common.sh@936 -- # '[' -z 3659105 ']' 00:06:19.070 14:47:04 -- common/autotest_common.sh@940 -- # kill -0 3659105 00:06:19.070 14:47:04 -- common/autotest_common.sh@941 -- # uname 00:06:19.070 14:47:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.070 14:47:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3659105 00:06:19.070 14:47:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.070 14:47:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.070 14:47:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3659105' 00:06:19.070 killing process with pid 3659105 00:06:19.070 14:47:04 -- common/autotest_common.sh@955 -- # kill 3659105 00:06:19.070 14:47:04 -- common/autotest_common.sh@960 -- # wait 3659105 00:06:19.636 14:47:05 -- accel/accel.sh@76 -- # trap - ERR 00:06:19.637 14:47:05 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:19.637 14:47:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:19.637 14:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.637 14:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:19.637 14:47:05 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:19.637 14:47:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:19.637 14:47:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.637 14:47:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.637 14:47:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.637 14:47:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.637 14:47:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.637 14:47:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.637 14:47:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.637 14:47:05 -- accel/accel.sh@41 -- # jq -r . 
00:06:19.637 14:47:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.637 14:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:19.637 14:47:05 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:19.637 14:47:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.637 14:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.637 14:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:19.637 ************************************ 00:06:19.637 START TEST accel_missing_filename 00:06:19.637 ************************************ 00:06:19.637 14:47:05 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:19.637 14:47:05 -- common/autotest_common.sh@638 -- # local es=0 00:06:19.637 14:47:05 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:19.637 14:47:05 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:19.637 14:47:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:19.637 14:47:05 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:19.637 14:47:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:19.637 14:47:05 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:19.637 14:47:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:19.637 14:47:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.637 14:47:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.637 14:47:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.637 14:47:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.637 14:47:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.637 14:47:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.637 14:47:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.637 14:47:05 -- accel/accel.sh@41 -- # jq -r . 00:06:19.637 [2024-04-26 14:47:05.328935] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:19.637 [2024-04-26 14:47:05.328998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659285 ] 00:06:19.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.637 [2024-04-26 14:47:05.361337] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.895 [2024-04-26 14:47:05.393848] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.895 [2024-04-26 14:47:05.481487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.895 [2024-04-26 14:47:05.543055] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.895 [2024-04-26 14:47:05.628676] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:20.153 A filename is required. 
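"A filename is required." is the expected failure here: for the compress/decompress workloads accel_perf needs an uncompressed input file passed via -l. A hedged sketch of a call that should start, using the input path the next test in this run uses:

  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib
  # note: adding -y (verify) is rejected for compress, as the following test demonstrates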
00:06:20.153 14:47:05 -- common/autotest_common.sh@641 -- # es=234 00:06:20.153 14:47:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:20.153 14:47:05 -- common/autotest_common.sh@650 -- # es=106 00:06:20.153 14:47:05 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:20.153 14:47:05 -- common/autotest_common.sh@658 -- # es=1 00:06:20.153 14:47:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:20.153 00:06:20.153 real 0m0.398s 00:06:20.153 user 0m0.280s 00:06:20.153 sys 0m0.151s 00:06:20.153 14:47:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.153 14:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:20.153 ************************************ 00:06:20.153 END TEST accel_missing_filename 00:06:20.153 ************************************ 00:06:20.153 14:47:05 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:20.153 14:47:05 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:20.153 14:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.153 14:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:20.153 ************************************ 00:06:20.153 START TEST accel_compress_verify 00:06:20.153 ************************************ 00:06:20.153 14:47:05 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:20.153 14:47:05 -- common/autotest_common.sh@638 -- # local es=0 00:06:20.153 14:47:05 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:20.153 14:47:05 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:20.153 14:47:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.153 14:47:05 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:20.153 14:47:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.154 14:47:05 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:20.154 14:47:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:20.154 14:47:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.154 14:47:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.154 14:47:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.154 14:47:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.154 14:47:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.154 14:47:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.154 14:47:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.154 14:47:05 -- accel/accel.sh@41 -- # jq -r . 00:06:20.154 [2024-04-26 14:47:05.845108] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
00:06:20.154 [2024-04-26 14:47:05.845173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659329 ] 00:06:20.154 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.154 [2024-04-26 14:47:05.878243] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.412 [2024-04-26 14:47:05.908674] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.412 [2024-04-26 14:47:06.000775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.412 [2024-04-26 14:47:06.063890] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.412 [2024-04-26 14:47:06.147758] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:20.670 00:06:20.670 Compression does not support the verify option, aborting. 00:06:20.670 14:47:06 -- common/autotest_common.sh@641 -- # es=161 00:06:20.670 14:47:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:20.670 14:47:06 -- common/autotest_common.sh@650 -- # es=33 00:06:20.670 14:47:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:20.670 14:47:06 -- common/autotest_common.sh@658 -- # es=1 00:06:20.670 14:47:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:20.670 00:06:20.670 real 0m0.399s 00:06:20.670 user 0m0.283s 00:06:20.670 sys 0m0.151s 00:06:20.670 14:47:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.670 14:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:20.670 ************************************ 00:06:20.670 END TEST accel_compress_verify 00:06:20.670 ************************************ 00:06:20.670 14:47:06 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:20.670 14:47:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:20.670 14:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.670 14:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:20.670 ************************************ 00:06:20.670 START TEST accel_wrong_workload 00:06:20.670 ************************************ 00:06:20.670 14:47:06 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:20.670 14:47:06 -- common/autotest_common.sh@638 -- # local es=0 00:06:20.670 14:47:06 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:20.670 14:47:06 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:20.670 14:47:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.670 14:47:06 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:20.670 14:47:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.670 14:47:06 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:20.670 14:47:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:20.670 14:47:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.670 14:47:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.670 14:47:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.670 14:47:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.670 14:47:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.670 14:47:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.670 14:47:06 -- 
accel/accel.sh@40 -- # local IFS=, 00:06:20.670 14:47:06 -- accel/accel.sh@41 -- # jq -r . 00:06:20.670 Unsupported workload type: foobar 00:06:20.670 [2024-04-26 14:47:06.366379] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:20.670 accel_perf options: 00:06:20.670 [-h help message] 00:06:20.670 [-q queue depth per core] 00:06:20.670 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:20.670 [-T number of threads per core 00:06:20.670 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:20.670 [-t time in seconds] 00:06:20.670 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:20.670 [ dif_verify, , dif_generate, dif_generate_copy 00:06:20.670 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:20.670 [-l for compress/decompress workloads, name of uncompressed input file 00:06:20.670 [-S for crc32c workload, use this seed value (default 0) 00:06:20.670 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:20.670 [-f for fill workload, use this BYTE value (default 255) 00:06:20.670 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:20.670 [-y verify result if this switch is on] 00:06:20.670 [-a tasks to allocate per core (default: same value as -q)] 00:06:20.670 Can be used to spread operations across a wider range of memory. 00:06:20.670 14:47:06 -- common/autotest_common.sh@641 -- # es=1 00:06:20.670 14:47:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:20.670 14:47:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:20.670 14:47:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:20.670 00:06:20.670 real 0m0.023s 00:06:20.670 user 0m0.011s 00:06:20.670 sys 0m0.012s 00:06:20.670 14:47:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.670 14:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:20.670 ************************************ 00:06:20.670 END TEST accel_wrong_workload 00:06:20.670 ************************************ 00:06:20.670 Error: writing output failed: Broken pipe 00:06:20.670 14:47:06 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:20.670 14:47:06 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:20.670 14:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.670 14:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:20.928 ************************************ 00:06:20.928 START TEST accel_negative_buffers 00:06:20.928 ************************************ 00:06:20.928 14:47:06 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:20.928 14:47:06 -- common/autotest_common.sh@638 -- # local es=0 00:06:20.928 14:47:06 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:20.928 14:47:06 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:20.928 14:47:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.928 14:47:06 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:20.928 14:47:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.928 14:47:06 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:20.928 14:47:06 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:20.928 14:47:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.928 14:47:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.928 14:47:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.928 14:47:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.928 14:47:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.928 14:47:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.928 14:47:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.928 14:47:06 -- accel/accel.sh@41 -- # jq -r . 00:06:20.928 -x option must be non-negative. 00:06:20.928 [2024-04-26 14:47:06.504189] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:20.928 accel_perf options: 00:06:20.928 [-h help message] 00:06:20.928 [-q queue depth per core] 00:06:20.928 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:20.928 [-T number of threads per core 00:06:20.928 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:20.928 [-t time in seconds] 00:06:20.928 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:20.928 [ dif_verify, , dif_generate, dif_generate_copy 00:06:20.928 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:20.928 [-l for compress/decompress workloads, name of uncompressed input file 00:06:20.928 [-S for crc32c workload, use this seed value (default 0) 00:06:20.928 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:20.928 [-f for fill workload, use this BYTE value (default 255) 00:06:20.928 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:20.928 [-y verify result if this switch is on] 00:06:20.928 [-a tasks to allocate per core (default: same value as -q)] 00:06:20.928 Can be used to spread operations across a wider range of memory. 
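Per the usage text above, -x sets the number of xor source buffers with a minimum of 2, so the test's -x -1 trips the non-negative check as intended. A sketch of a valid invocation under the same assumptions:

  ./build/examples/accel_perf -t 1 -w xor -y -x 2   # two source buffers, verify enabled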
00:06:20.928 14:47:06 -- common/autotest_common.sh@641 -- # es=1 00:06:20.928 14:47:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:20.928 14:47:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:20.928 14:47:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:20.928 00:06:20.928 real 0m0.022s 00:06:20.928 user 0m0.012s 00:06:20.928 sys 0m0.010s 00:06:20.928 14:47:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.928 14:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:20.928 ************************************ 00:06:20.928 END TEST accel_negative_buffers 00:06:20.928 ************************************ 00:06:20.928 Error: writing output failed: Broken pipe 00:06:20.928 14:47:06 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:20.928 14:47:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:20.928 14:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.928 14:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:20.928 ************************************ 00:06:20.928 START TEST accel_crc32c 00:06:20.928 ************************************ 00:06:20.928 14:47:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:20.928 14:47:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.928 14:47:06 -- accel/accel.sh@17 -- # local accel_module 00:06:20.928 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:20.928 14:47:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:20.928 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:20.928 14:47:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:20.928 14:47:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.928 14:47:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.928 14:47:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.928 14:47:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.928 14:47:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.928 14:47:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.928 14:47:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.928 14:47:06 -- accel/accel.sh@41 -- # jq -r . 00:06:20.928 [2024-04-26 14:47:06.635701] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:20.928 [2024-04-26 14:47:06.635763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659535 ] 00:06:20.928 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.186 [2024-04-26 14:47:06.669118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:21.186 [2024-04-26 14:47:06.699181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.186 [2024-04-26 14:47:06.790789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val= 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val= 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val=0x1 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val= 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val= 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val=crc32c 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val=32 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val= 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val=software 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val=32 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val=32 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val=1 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 
00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val=Yes 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val= 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:21.186 14:47:06 -- accel/accel.sh@20 -- # val= 00:06:21.186 14:47:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # IFS=: 00:06:21.186 14:47:06 -- accel/accel.sh@19 -- # read -r var val 00:06:22.558 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.558 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.558 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.558 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.558 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.558 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.558 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.558 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.558 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.558 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.558 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.558 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.558 14:47:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.558 14:47:08 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:22.558 14:47:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.558 00:06:22.558 real 0m1.407s 00:06:22.558 user 0m1.266s 00:06:22.558 sys 0m0.142s 00:06:22.558 14:47:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:22.558 14:47:08 -- common/autotest_common.sh@10 -- # set +x 00:06:22.558 ************************************ 00:06:22.558 END TEST accel_crc32c 00:06:22.558 ************************************ 00:06:22.558 14:47:08 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:22.558 14:47:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:22.558 14:47:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.558 14:47:08 -- common/autotest_common.sh@10 -- # set +x 00:06:22.558 ************************************ 00:06:22.558 START TEST accel_crc32c_C2 00:06:22.558 ************************************ 00:06:22.558 14:47:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:22.558 14:47:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.558 14:47:08 -- accel/accel.sh@17 -- # local accel_module 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # 
IFS=: 00:06:22.558 14:47:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:22.558 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.558 14:47:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:22.558 14:47:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.558 14:47:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.558 14:47:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.558 14:47:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.558 14:47:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.558 14:47:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.558 14:47:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.558 14:47:08 -- accel/accel.sh@41 -- # jq -r . 00:06:22.558 [2024-04-26 14:47:08.160611] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:22.558 [2024-04-26 14:47:08.160674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659697 ] 00:06:22.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.558 [2024-04-26 14:47:08.193951] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.558 [2024-04-26 14:47:08.224147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.816 [2024-04-26 14:47:08.316533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.816 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.816 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.816 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.816 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.816 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.816 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.816 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val=0x1 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val=crc32c 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val=0 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 
-- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val=software 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val=32 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val=32 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val=1 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val=Yes 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:22.817 14:47:08 -- accel/accel.sh@20 -- # val= 00:06:22.817 14:47:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # IFS=: 00:06:22.817 14:47:08 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.195 14:47:09 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:24.195 14:47:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.195 00:06:24.195 real 0m1.413s 00:06:24.195 user 0m1.263s 00:06:24.195 sys 0m0.151s 00:06:24.195 14:47:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.195 14:47:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.195 ************************************ 00:06:24.195 END TEST accel_crc32c_C2 00:06:24.195 ************************************ 00:06:24.195 14:47:09 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:24.195 14:47:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:24.195 14:47:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.195 14:47:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.195 ************************************ 00:06:24.195 START TEST accel_copy 00:06:24.195 ************************************ 00:06:24.195 14:47:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:24.195 14:47:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.195 14:47:09 -- accel/accel.sh@17 -- # local accel_module 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:24.195 14:47:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.195 14:47:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.195 14:47:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.195 14:47:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.195 14:47:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.195 14:47:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.195 14:47:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.195 14:47:09 -- accel/accel.sh@41 -- # jq -r . 00:06:24.195 [2024-04-26 14:47:09.694500] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:24.195 [2024-04-26 14:47:09.694566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659978 ] 00:06:24.195 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.195 [2024-04-26 14:47:09.727599] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:24.195 [2024-04-26 14:47:09.757533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.195 [2024-04-26 14:47:09.849075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val=0x1 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val=copy 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val=software 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.195 14:47:09 -- accel/accel.sh@20 -- # val=32 00:06:24.195 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.195 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.196 14:47:09 -- accel/accel.sh@20 -- # val=32 00:06:24.196 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.196 14:47:09 -- accel/accel.sh@20 -- # val=1 00:06:24.196 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.196 14:47:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.196 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.196 14:47:09 -- accel/accel.sh@20 -- # val=Yes 00:06:24.196 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 
00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.196 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.196 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:24.196 14:47:09 -- accel/accel.sh@20 -- # val= 00:06:24.196 14:47:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # IFS=: 00:06:24.196 14:47:09 -- accel/accel.sh@19 -- # read -r var val 00:06:25.594 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.594 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.594 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.594 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.594 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.594 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.594 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.594 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.594 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.594 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.594 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.594 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.594 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.594 14:47:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.594 14:47:11 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:25.595 14:47:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.595 00:06:25.595 real 0m1.411s 00:06:25.595 user 0m1.269s 00:06:25.595 sys 0m0.143s 00:06:25.595 14:47:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.595 14:47:11 -- common/autotest_common.sh@10 -- # set +x 00:06:25.595 ************************************ 00:06:25.595 END TEST accel_copy 00:06:25.595 ************************************ 00:06:25.595 14:47:11 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.595 14:47:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:25.595 14:47:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.595 14:47:11 -- common/autotest_common.sh@10 -- # set +x 00:06:25.595 ************************************ 00:06:25.595 START TEST accel_fill 00:06:25.595 ************************************ 00:06:25.595 14:47:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.595 14:47:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.595 14:47:11 -- accel/accel.sh@17 -- # local accel_module 00:06:25.595 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.595 14:47:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.595 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.595 14:47:11 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.595 14:47:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.595 14:47:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.595 14:47:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.595 14:47:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.595 14:47:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.595 14:47:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.595 14:47:11 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.595 14:47:11 -- accel/accel.sh@41 -- # jq -r . 00:06:25.595 [2024-04-26 14:47:11.226596] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:25.595 [2024-04-26 14:47:11.226659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660144 ] 00:06:25.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.595 [2024-04-26 14:47:11.259521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.595 [2024-04-26 14:47:11.289409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.853 [2024-04-26 14:47:11.381586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val=0x1 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val=fill 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val=0x80 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 
00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val=software 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val=64 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val=64 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val=1 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val=Yes 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:25.853 14:47:11 -- accel/accel.sh@20 -- # val= 00:06:25.853 14:47:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # IFS=: 00:06:25.853 14:47:11 -- accel/accel.sh@19 -- # read -r var val 00:06:27.229 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.229 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.229 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.229 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.229 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.229 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.229 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.229 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.229 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.229 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.229 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.229 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.229 14:47:12 -- accel/accel.sh@27 -- # [[ -n software 
]] 00:06:27.229 14:47:12 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:27.229 14:47:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.229 00:06:27.229 real 0m1.411s 00:06:27.229 user 0m1.271s 00:06:27.229 sys 0m0.142s 00:06:27.229 14:47:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.229 14:47:12 -- common/autotest_common.sh@10 -- # set +x 00:06:27.229 ************************************ 00:06:27.229 END TEST accel_fill 00:06:27.229 ************************************ 00:06:27.229 14:47:12 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:27.229 14:47:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:27.229 14:47:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.229 14:47:12 -- common/autotest_common.sh@10 -- # set +x 00:06:27.229 ************************************ 00:06:27.229 START TEST accel_copy_crc32c 00:06:27.229 ************************************ 00:06:27.229 14:47:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:27.229 14:47:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.229 14:47:12 -- accel/accel.sh@17 -- # local accel_module 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.229 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.229 14:47:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:27.229 14:47:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:27.229 14:47:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.229 14:47:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.229 14:47:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.229 14:47:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.229 14:47:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.229 14:47:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.229 14:47:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.229 14:47:12 -- accel/accel.sh@41 -- # jq -r . 00:06:27.229 [2024-04-26 14:47:12.757833] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:27.229 [2024-04-26 14:47:12.757893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660313 ] 00:06:27.229 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.229 [2024-04-26 14:47:12.791263] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
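[Note: accel_fill, which just finished, drives the fill operation (a memset analogue writing one repeated byte across the buffer); the harness passed -f 128 -q 64 -a 64, and the fill value shows up in its xtrace as val=0x80, i.e. 128 decimal. The copy_crc32c test starting here chains the two earlier operations — copy the source buffer and compute CRC-32C over it in one step — which is why two '4096 bytes' buffers appear in the parameter read-back below. Standalone sketches under the same assumptions:

  $ ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
  $ ./build/examples/accel_perf -t 1 -w copy_crc32c -y]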
00:06:27.229 [2024-04-26 14:47:12.821300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.229 [2024-04-26 14:47:12.912743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val=0x1 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val=0 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val=software 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val=32 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val=32 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val=1 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val=Yes 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:27.488 14:47:12 -- accel/accel.sh@20 -- # val= 00:06:27.488 14:47:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # IFS=: 00:06:27.488 14:47:12 -- accel/accel.sh@19 -- # read -r var val 00:06:28.420 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.420 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.420 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.420 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.420 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.420 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.420 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.420 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.420 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.420 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.420 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.420 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.420 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.420 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.420 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.420 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.420 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.421 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.421 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.421 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.421 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.421 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.421 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.421 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.421 14:47:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.421 14:47:14 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:28.421 14:47:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.421 00:06:28.421 real 0m1.405s 00:06:28.421 user 0m1.267s 00:06:28.421 sys 0m0.141s 00:06:28.421 14:47:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.421 14:47:14 -- common/autotest_common.sh@10 -- # set +x 00:06:28.421 ************************************ 00:06:28.421 END TEST accel_copy_crc32c 00:06:28.421 ************************************ 00:06:28.679 14:47:14 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:28.679 14:47:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:28.679 14:47:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.679 14:47:14 -- common/autotest_common.sh@10 -- # set +x 00:06:28.679 ************************************ 00:06:28.679 START TEST accel_copy_crc32c_C2 00:06:28.679 
************************************ 00:06:28.679 14:47:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:28.679 14:47:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.679 14:47:14 -- accel/accel.sh@17 -- # local accel_module 00:06:28.679 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:47:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:28.679 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:47:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:28.679 14:47:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.679 14:47:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.679 14:47:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.679 14:47:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.679 14:47:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.679 14:47:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.679 14:47:14 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.679 14:47:14 -- accel/accel.sh@41 -- # jq -r . 00:06:28.679 [2024-04-26 14:47:14.285170] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:28.679 [2024-04-26 14:47:14.285232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660588 ] 00:06:28.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.679 [2024-04-26 14:47:14.317121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
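[Note: this copy_crc32c_C2 variant reruns the chained workload with -C 2 appended; correspondingly, the parameter read-back below shows the second buffer growing to '8192 bytes', consistent with a two-vector source — an inference from the log, not a statement of accel_perf's documented -C semantics. Sketch:

  $ ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2]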
00:06:28.679 [2024-04-26 14:47:14.349358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.937 [2024-04-26 14:47:14.441431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.937 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.937 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.937 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.937 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.937 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.937 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.937 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.937 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.937 14:47:14 -- accel/accel.sh@20 -- # val=0x1 00:06:28.937 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.937 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val=0 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val=software 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val=32 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val=32 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val=1 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val=Yes 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:28.938 14:47:14 -- accel/accel.sh@20 -- # val= 00:06:28.938 14:47:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # IFS=: 00:06:28.938 14:47:14 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:15 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:15 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:15 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:15 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:15 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:15 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.313 14:47:15 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:30.313 14:47:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.313 00:06:30.313 real 0m1.413s 00:06:30.313 user 0m1.268s 00:06:30.313 sys 0m0.148s 00:06:30.313 14:47:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.313 14:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:30.313 ************************************ 00:06:30.313 END TEST accel_copy_crc32c_C2 00:06:30.313 ************************************ 00:06:30.313 14:47:15 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:30.313 14:47:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:30.313 14:47:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.313 14:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:30.313 ************************************ 00:06:30.313 START TEST accel_dualcast 00:06:30.313 ************************************ 00:06:30.313 
14:47:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:30.313 14:47:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.313 14:47:15 -- accel/accel.sh@17 -- # local accel_module 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:30.313 14:47:15 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:30.313 14:47:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.313 14:47:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.313 14:47:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.313 14:47:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.313 14:47:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.313 14:47:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.313 14:47:15 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.313 14:47:15 -- accel/accel.sh@41 -- # jq -r . 00:06:30.313 [2024-04-26 14:47:15.815742] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:30.313 [2024-04-26 14:47:15.815806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660758 ] 00:06:30.313 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.313 [2024-04-26 14:47:15.849498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.313 [2024-04-26 14:47:15.879812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.313 [2024-04-26 14:47:15.971198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val=0x1 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val=dualcast 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 
-- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val=software 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val=32 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val=32 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val=1 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val=Yes 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:30.313 14:47:16 -- accel/accel.sh@20 -- # val= 00:06:30.313 14:47:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # IFS=: 00:06:30.313 14:47:16 -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.687 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.687 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.687 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.687 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.687 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.687 14:47:17 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 14:47:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.687 14:47:17 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:31.687 14:47:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.687 00:06:31.687 real 0m1.407s 00:06:31.687 user 0m1.261s 00:06:31.687 sys 0m0.146s 00:06:31.687 14:47:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.687 14:47:17 -- common/autotest_common.sh@10 -- # set +x 00:06:31.687 ************************************ 00:06:31.687 END TEST accel_dualcast 00:06:31.687 ************************************ 00:06:31.687 14:47:17 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:31.687 14:47:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:31.687 14:47:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.687 14:47:17 -- common/autotest_common.sh@10 -- # set +x 00:06:31.687 ************************************ 00:06:31.687 START TEST accel_compare 00:06:31.687 ************************************ 00:06:31.687 14:47:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:31.687 14:47:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.687 14:47:17 -- accel/accel.sh@17 -- # local accel_module 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 14:47:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:31.687 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 14:47:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:31.687 14:47:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.687 14:47:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.687 14:47:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.687 14:47:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.687 14:47:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.687 14:47:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.687 14:47:17 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.687 14:47:17 -- accel/accel.sh@41 -- # jq -r . 00:06:31.687 [2024-04-26 14:47:17.340250] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:31.687 [2024-04-26 14:47:17.340324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660919 ] 00:06:31.687 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.687 [2024-04-26 14:47:17.374197] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:31.687 [2024-04-26 14:47:17.404807] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.945 [2024-04-26 14:47:17.497087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.945 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.945 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.945 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.945 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.945 14:47:17 -- accel/accel.sh@20 -- # val=0x1 00:06:31.945 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.945 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.945 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.945 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.945 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.945 14:47:17 -- accel/accel.sh@20 -- # val=compare 00:06:31.945 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.945 14:47:17 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.945 14:47:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.945 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.945 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.945 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.945 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.946 14:47:17 -- accel/accel.sh@20 -- # val=software 00:06:31.946 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.946 14:47:17 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.946 14:47:17 -- accel/accel.sh@20 -- # val=32 00:06:31.946 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.946 14:47:17 -- accel/accel.sh@20 -- # val=32 00:06:31.946 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.946 14:47:17 -- accel/accel.sh@20 -- # val=1 00:06:31.946 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.946 14:47:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.946 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.946 14:47:17 -- accel/accel.sh@20 -- # val=Yes 00:06:31.946 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 
00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.946 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.946 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:31.946 14:47:17 -- accel/accel.sh@20 -- # val= 00:06:31.946 14:47:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # IFS=: 00:06:31.946 14:47:17 -- accel/accel.sh@19 -- # read -r var val 00:06:33.319 14:47:18 -- accel/accel.sh@20 -- # val= 00:06:33.319 14:47:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.319 14:47:18 -- accel/accel.sh@19 -- # IFS=: 00:06:33.319 14:47:18 -- accel/accel.sh@19 -- # read -r var val 00:06:33.319 14:47:18 -- accel/accel.sh@20 -- # val= 00:06:33.319 14:47:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.319 14:47:18 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 14:47:18 -- accel/accel.sh@20 -- # val= 00:06:33.320 14:47:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 14:47:18 -- accel/accel.sh@20 -- # val= 00:06:33.320 14:47:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 14:47:18 -- accel/accel.sh@20 -- # val= 00:06:33.320 14:47:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 14:47:18 -- accel/accel.sh@20 -- # val= 00:06:33.320 14:47:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 14:47:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.320 14:47:18 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:33.320 14:47:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.320 00:06:33.320 real 0m1.407s 00:06:33.320 user 0m1.263s 00:06:33.320 sys 0m0.144s 00:06:33.320 14:47:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.320 14:47:18 -- common/autotest_common.sh@10 -- # set +x 00:06:33.320 ************************************ 00:06:33.320 END TEST accel_compare 00:06:33.320 ************************************ 00:06:33.320 14:47:18 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:33.320 14:47:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:33.320 14:47:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.320 14:47:18 -- common/autotest_common.sh@10 -- # set +x 00:06:33.320 ************************************ 00:06:33.320 START TEST accel_xor 00:06:33.320 ************************************ 00:06:33.320 14:47:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:33.320 14:47:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.320 14:47:18 -- accel/accel.sh@17 -- # local accel_module 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 14:47:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:33.320 14:47:18 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 14:47:18 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:33.320 14:47:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.320 14:47:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.320 14:47:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.320 14:47:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.320 14:47:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.320 14:47:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.320 14:47:18 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.320 14:47:18 -- accel/accel.sh@41 -- # jq -r . 00:06:33.320 [2024-04-26 14:47:18.868210] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:33.320 [2024-04-26 14:47:18.868266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661200 ] 00:06:33.320 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.320 [2024-04-26 14:47:18.901147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:33.320 [2024-04-26 14:47:18.931416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.320 [2024-04-26 14:47:19.023377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val= 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val= 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val=0x1 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val= 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val= 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val=xor 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val=2 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val= 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- 
accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val=software 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@22 -- # accel_module=software 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val=32 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val=32 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val=1 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.578 14:47:19 -- accel/accel.sh@20 -- # val=Yes 00:06:33.578 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.578 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.579 14:47:19 -- accel/accel.sh@20 -- # val= 00:06:33.579 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.579 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.579 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:33.579 14:47:19 -- accel/accel.sh@20 -- # val= 00:06:33.579 14:47:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.579 14:47:19 -- accel/accel.sh@19 -- # IFS=: 00:06:33.579 14:47:19 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.951 14:47:20 
-- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:34.951 14:47:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.951 00:06:34.951 real 0m1.410s 00:06:34.951 user 0m1.264s 00:06:34.951 sys 0m0.146s 00:06:34.951 14:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.951 14:47:20 -- common/autotest_common.sh@10 -- # set +x 00:06:34.951 ************************************ 00:06:34.951 END TEST accel_xor 00:06:34.951 ************************************ 00:06:34.951 14:47:20 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:34.951 14:47:20 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:34.951 14:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.951 14:47:20 -- common/autotest_common.sh@10 -- # set +x 00:06:34.951 ************************************ 00:06:34.951 START TEST accel_xor 00:06:34.951 ************************************ 00:06:34.951 14:47:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:34.951 14:47:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.951 14:47:20 -- accel/accel.sh@17 -- # local accel_module 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:34.951 14:47:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.951 14:47:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.951 14:47:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.951 14:47:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.951 14:47:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.951 14:47:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.951 14:47:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.951 14:47:20 -- accel/accel.sh@41 -- # jq -r . 00:06:34.951 [2024-04-26 14:47:20.396250] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:34.951 [2024-04-26 14:47:20.396324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661362 ] 00:06:34.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.951 [2024-04-26 14:47:20.429592] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
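The second accel_xor case starting here differs from the one just finished only in fan-in: -x 3 asks accel_perf to XOR three source buffers into the destination instead of the default two (compare the val=2 and val=3 entries in the two traces). A sketch of both invocations, under the same assumptions as the earlier standalone example:

  ./build/examples/accel_perf -t 1 -w xor -y        # default: two XOR sources
  ./build/examples/accel_perf -t 1 -w xor -y -x 3   # three XOR sources, as run_test passes here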
00:06:34.951 [2024-04-26 14:47:20.459494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.951 [2024-04-26 14:47:20.551578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val=0x1 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val=xor 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val=3 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.951 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.951 14:47:20 -- accel/accel.sh@20 -- # val=software 00:06:34.951 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.952 14:47:20 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.952 14:47:20 -- accel/accel.sh@20 -- # val=32 00:06:34.952 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.952 14:47:20 -- accel/accel.sh@20 -- # val=32 00:06:34.952 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.952 14:47:20 -- accel/accel.sh@20 -- # val=1 00:06:34.952 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.952 14:47:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.952 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 
00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.952 14:47:20 -- accel/accel.sh@20 -- # val=Yes 00:06:34.952 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.952 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.952 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:34.952 14:47:20 -- accel/accel.sh@20 -- # val= 00:06:34.952 14:47:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # IFS=: 00:06:34.952 14:47:20 -- accel/accel.sh@19 -- # read -r var val 00:06:36.322 14:47:21 -- accel/accel.sh@20 -- # val= 00:06:36.322 14:47:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.322 14:47:21 -- accel/accel.sh@19 -- # IFS=: 00:06:36.322 14:47:21 -- accel/accel.sh@19 -- # read -r var val 00:06:36.322 14:47:21 -- accel/accel.sh@20 -- # val= 00:06:36.322 14:47:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.322 14:47:21 -- accel/accel.sh@19 -- # IFS=: 00:06:36.322 14:47:21 -- accel/accel.sh@19 -- # read -r var val 00:06:36.322 14:47:21 -- accel/accel.sh@20 -- # val= 00:06:36.322 14:47:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.322 14:47:21 -- accel/accel.sh@19 -- # IFS=: 00:06:36.322 14:47:21 -- accel/accel.sh@19 -- # read -r var val 00:06:36.322 14:47:21 -- accel/accel.sh@20 -- # val= 00:06:36.322 14:47:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.322 14:47:21 -- accel/accel.sh@19 -- # IFS=: 00:06:36.322 14:47:21 -- accel/accel.sh@19 -- # read -r var val 00:06:36.323 14:47:21 -- accel/accel.sh@20 -- # val= 00:06:36.323 14:47:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.323 14:47:21 -- accel/accel.sh@19 -- # IFS=: 00:06:36.323 14:47:21 -- accel/accel.sh@19 -- # read -r var val 00:06:36.323 14:47:21 -- accel/accel.sh@20 -- # val= 00:06:36.323 14:47:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.323 14:47:21 -- accel/accel.sh@19 -- # IFS=: 00:06:36.323 14:47:21 -- accel/accel.sh@19 -- # read -r var val 00:06:36.323 14:47:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.323 14:47:21 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:36.323 14:47:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.323 00:06:36.323 real 0m1.412s 00:06:36.323 user 0m1.268s 00:06:36.323 sys 0m0.144s 00:06:36.323 14:47:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.323 14:47:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.323 ************************************ 00:06:36.323 END TEST accel_xor 00:06:36.323 ************************************ 00:06:36.323 14:47:21 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:36.323 14:47:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:36.323 14:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.323 14:47:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.323 ************************************ 00:06:36.323 START TEST accel_dif_verify 00:06:36.323 ************************************ 00:06:36.323 14:47:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:36.323 14:47:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.323 14:47:21 -- accel/accel.sh@17 -- # local accel_module 00:06:36.323 14:47:21 -- accel/accel.sh@19 -- # IFS=: 
00:06:36.323 14:47:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:36.323 14:47:21 -- accel/accel.sh@19 -- # read -r var val 00:06:36.323 14:47:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:36.323 14:47:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.323 14:47:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.323 14:47:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.323 14:47:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.323 14:47:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.323 14:47:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.323 14:47:21 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.323 14:47:21 -- accel/accel.sh@41 -- # jq -r . 00:06:36.323 [2024-04-26 14:47:21.924277] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:36.323 [2024-04-26 14:47:21.924356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661532 ] 00:06:36.323 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.323 [2024-04-26 14:47:21.957207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:36.323 [2024-04-26 14:47:21.987292] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.581 [2024-04-26 14:47:22.079916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val= 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val= 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val=0x1 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val= 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val= 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val=dif_verify 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- 
accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val= 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val=software 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val=32 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val=32 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val=1 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val=No 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val= 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:36.581 14:47:22 -- accel/accel.sh@20 -- # val= 00:06:36.581 14:47:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # IFS=: 00:06:36.581 14:47:22 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 
14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.951 14:47:23 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:37.951 14:47:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.951 00:06:37.951 real 0m1.398s 00:06:37.951 user 0m1.269s 00:06:37.951 sys 0m0.131s 00:06:37.951 14:47:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.951 14:47:23 -- common/autotest_common.sh@10 -- # set +x 00:06:37.951 ************************************ 00:06:37.951 END TEST accel_dif_verify 00:06:37.951 ************************************ 00:06:37.951 14:47:23 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:37.951 14:47:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:37.951 14:47:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.951 14:47:23 -- common/autotest_common.sh@10 -- # set +x 00:06:37.951 ************************************ 00:06:37.951 START TEST accel_dif_generate 00:06:37.951 ************************************ 00:06:37.951 14:47:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:37.951 14:47:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.951 14:47:23 -- accel/accel.sh@17 -- # local accel_module 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:37.951 14:47:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.951 14:47:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.951 14:47:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.951 14:47:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.951 14:47:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.951 14:47:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.951 14:47:23 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.951 14:47:23 -- accel/accel.sh@41 -- # jq -r . 00:06:37.951 [2024-04-26 14:47:23.445140] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:37.951 [2024-04-26 14:47:23.445214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661807 ] 00:06:37.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.951 [2024-04-26 14:47:23.478252] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
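The dif_* cases exercise T10 DIF protection-information handling. The values recorded in the dif_verify trace above ('4096 bytes', '512 bytes', '8 bytes') look like the request geometry: 4096-byte buffers split into 512-byte blocks, each guarded by 8 bytes of DIF metadata; dif_verify checks that metadata, and dif_generate, starting here, produces it. A sketch mirroring the run_test line (no -y, matching val=No in the trace):

  ./build/examples/accel_perf -t 1 -w dif_generate   # same assumptions as the earlier sketches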
00:06:37.951 [2024-04-26 14:47:23.508537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.951 [2024-04-26 14:47:23.600661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val=0x1 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val=dif_generate 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val=software 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val=32 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val=32 00:06:37.951 14:47:23 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val=1 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val=No 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:37.951 14:47:23 -- accel/accel.sh@20 -- # val= 00:06:37.951 14:47:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # IFS=: 00:06:37.951 14:47:23 -- accel/accel.sh@19 -- # read -r var val 00:06:39.322 14:47:24 -- accel/accel.sh@20 -- # val= 00:06:39.322 14:47:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # IFS=: 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # read -r var val 00:06:39.322 14:47:24 -- accel/accel.sh@20 -- # val= 00:06:39.322 14:47:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # IFS=: 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # read -r var val 00:06:39.322 14:47:24 -- accel/accel.sh@20 -- # val= 00:06:39.322 14:47:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # IFS=: 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # read -r var val 00:06:39.322 14:47:24 -- accel/accel.sh@20 -- # val= 00:06:39.322 14:47:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # IFS=: 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # read -r var val 00:06:39.322 14:47:24 -- accel/accel.sh@20 -- # val= 00:06:39.322 14:47:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # IFS=: 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # read -r var val 00:06:39.322 14:47:24 -- accel/accel.sh@20 -- # val= 00:06:39.322 14:47:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # IFS=: 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # read -r var val 00:06:39.322 14:47:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.322 14:47:24 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:39.322 14:47:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.322 00:06:39.322 real 0m1.410s 00:06:39.322 user 0m1.266s 00:06:39.322 sys 0m0.146s 00:06:39.322 14:47:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.322 14:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:39.322 ************************************ 00:06:39.322 END TEST accel_dif_generate 00:06:39.322 ************************************ 00:06:39.322 14:47:24 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:39.322 14:47:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:39.322 
14:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.322 14:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:39.322 ************************************ 00:06:39.322 START TEST accel_dif_generate_copy 00:06:39.322 ************************************ 00:06:39.322 14:47:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:39.322 14:47:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.322 14:47:24 -- accel/accel.sh@17 -- # local accel_module 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # IFS=: 00:06:39.322 14:47:24 -- accel/accel.sh@19 -- # read -r var val 00:06:39.322 14:47:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:39.322 14:47:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:39.322 14:47:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.322 14:47:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.322 14:47:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.322 14:47:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.322 14:47:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.322 14:47:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.323 14:47:24 -- accel/accel.sh@40 -- # local IFS=, 00:06:39.323 14:47:24 -- accel/accel.sh@41 -- # jq -r . 00:06:39.323 [2024-04-26 14:47:24.971355] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:39.323 [2024-04-26 14:47:24.971415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661976 ] 00:06:39.323 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.323 [2024-04-26 14:47:25.002956] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
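dif_generate_copy plausibly fuses the generate step with a copy: the payload is copied to a second buffer with DIF metadata inserted along the way, which would explain the two separate '4096 bytes' values (source and destination) in the trace that follows. Sketch under the same assumptions:

  ./build/examples/accel_perf -t 1 -w dif_generate_copy   # fused generate+copy is an inference from the op name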
00:06:39.323 [2024-04-26 14:47:25.033076] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.581 [2024-04-26 14:47:25.125254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val= 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val= 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val=0x1 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val= 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val= 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val= 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val=software 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@22 -- # accel_module=software 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val=32 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val=32 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val=1 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.581 14:47:25 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val=No 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val= 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:39.581 14:47:25 -- accel/accel.sh@20 -- # val= 00:06:39.581 14:47:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # IFS=: 00:06:39.581 14:47:25 -- accel/accel.sh@19 -- # read -r var val 00:06:40.952 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:40.952 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.952 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:40.952 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:40.952 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:40.952 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.952 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:40.952 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:40.952 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:40.952 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.952 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:40.953 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:40.953 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:40.953 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.953 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:40.953 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:40.953 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:40.953 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.953 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:40.953 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:40.953 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:40.953 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.953 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:40.953 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:40.953 14:47:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.953 14:47:26 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:40.953 14:47:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.953 00:06:40.953 real 0m1.412s 00:06:40.953 user 0m1.271s 00:06:40.953 sys 0m0.141s 00:06:40.953 14:47:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.953 14:47:26 -- common/autotest_common.sh@10 -- # set +x 00:06:40.953 ************************************ 00:06:40.953 END TEST accel_dif_generate_copy 00:06:40.953 ************************************ 00:06:40.953 14:47:26 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:40.953 14:47:26 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.953 14:47:26 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:40.953 14:47:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.953 14:47:26 -- common/autotest_common.sh@10 -- # set +x 00:06:40.953 ************************************ 00:06:40.953 START TEST accel_comp 00:06:40.953 ************************************ 00:06:40.953 14:47:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.953 14:47:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.953 14:47:26 -- accel/accel.sh@17 -- # local accel_module 00:06:40.953 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:40.953 14:47:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.953 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:40.953 14:47:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.953 14:47:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.953 14:47:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.953 14:47:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.953 14:47:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.953 14:47:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.953 14:47:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.953 14:47:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.953 14:47:26 -- accel/accel.sh@41 -- # jq -r . 00:06:40.953 [2024-04-26 14:47:26.499959] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:40.953 [2024-04-26 14:47:26.500037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662138 ] 00:06:40.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.953 [2024-04-26 14:47:26.533799] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
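compress is the first workload in this run that needs real input, so run_test adds -l pointing at a corpus file (test/accel/bib in the repo); accel_perf then compresses that data repeatedly for the 1-second run. Standalone sketch with the path taken from the trace:

  ./build/examples/accel_perf -t 1 -w compress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib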
00:06:40.953 [2024-04-26 14:47:26.564553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.953 [2024-04-26 14:47:26.656668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.211 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:41.211 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.211 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:41.211 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.211 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:41.211 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.211 14:47:26 -- accel/accel.sh@20 -- # val=0x1 00:06:41.211 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.211 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:41.211 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.211 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.211 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val=compress 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val=software 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val=32 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val=32 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val=1 00:06:41.212 
14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val=No 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:41.212 14:47:26 -- accel/accel.sh@20 -- # val= 00:06:41.212 14:47:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # IFS=: 00:06:41.212 14:47:26 -- accel/accel.sh@19 -- # read -r var val 00:06:42.145 14:47:27 -- accel/accel.sh@20 -- # val= 00:06:42.145 14:47:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # IFS=: 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # read -r var val 00:06:42.145 14:47:27 -- accel/accel.sh@20 -- # val= 00:06:42.145 14:47:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # IFS=: 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # read -r var val 00:06:42.145 14:47:27 -- accel/accel.sh@20 -- # val= 00:06:42.145 14:47:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # IFS=: 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # read -r var val 00:06:42.145 14:47:27 -- accel/accel.sh@20 -- # val= 00:06:42.145 14:47:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # IFS=: 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # read -r var val 00:06:42.145 14:47:27 -- accel/accel.sh@20 -- # val= 00:06:42.145 14:47:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # IFS=: 00:06:42.145 14:47:27 -- accel/accel.sh@19 -- # read -r var val 00:06:42.145 14:47:27 -- accel/accel.sh@20 -- # val= 00:06:42.439 14:47:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.439 14:47:27 -- accel/accel.sh@19 -- # IFS=: 00:06:42.439 14:47:27 -- accel/accel.sh@19 -- # read -r var val 00:06:42.439 14:47:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.439 14:47:27 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:42.439 14:47:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.439 00:06:42.439 real 0m1.402s 00:06:42.439 user 0m1.250s 00:06:42.439 sys 0m0.154s 00:06:42.439 14:47:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.439 14:47:27 -- common/autotest_common.sh@10 -- # set +x 00:06:42.439 ************************************ 00:06:42.439 END TEST accel_comp 00:06:42.439 ************************************ 00:06:42.439 14:47:27 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.439 14:47:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:42.439 14:47:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.439 14:47:27 -- common/autotest_common.sh@10 -- # set +x 00:06:42.439 ************************************ 
00:06:42.439 START TEST accel_decomp 00:06:42.439 ************************************ 00:06:42.439 14:47:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.439 14:47:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.439 14:47:28 -- accel/accel.sh@17 -- # local accel_module 00:06:42.439 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.439 14:47:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.439 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.439 14:47:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.439 14:47:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.439 14:47:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.439 14:47:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.439 14:47:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.439 14:47:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.439 14:47:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.439 14:47:28 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.439 14:47:28 -- accel/accel.sh@41 -- # jq -r . 00:06:42.439 [2024-04-26 14:47:28.023206] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:42.439 [2024-04-26 14:47:28.023263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662420 ] 00:06:42.439 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.439 [2024-04-26 14:47:28.056348] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
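decompress reuses the same corpus; accel_perf presumably compresses the -l file up front to produce its decompress input, and -y (val=Yes in the trace below) verifies each decompressed result against the original bytes. Sketch:

  ./build/examples/accel_perf -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y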
00:06:42.439 [2024-04-26 14:47:28.086673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.766 [2024-04-26 14:47:28.180243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val= 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val= 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val= 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val=0x1 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val= 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val= 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val=decompress 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val= 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val=software 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@22 -- # accel_module=software 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val=32 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val=32 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val=1 
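Almost every record in this trace is the same three-step pattern: accel.sh sets IFS=:, reads one expected setting with read -r var val, and dispatches on it in a case statement (the accel_opc=decompress and accel_module=software assignments above are two of its arms). Paraphrased, the loop has roughly this shape; the arm labels are assumptions inferred from the xtrace, not the verbatim accel.sh source:

    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # e.g. val=decompress above
            module) accel_module=$val ;;  # e.g. val=software above
            *)      : ;;                  # sizes, queue depth, duration, verify flag, ...
        esac
    done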
00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val=Yes 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val= 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:42.766 14:47:28 -- accel/accel.sh@20 -- # val= 00:06:42.766 14:47:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # IFS=: 00:06:42.766 14:47:28 -- accel/accel.sh@19 -- # read -r var val 00:06:43.698 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:43.698 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:43.698 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:43.698 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:43.698 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:43.698 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:43.698 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:43.698 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:43.698 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:43.698 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:43.698 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:43.698 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:43.698 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:43.698 14:47:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.698 14:47:29 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.698 14:47:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.698 00:06:43.698 real 0m1.412s 00:06:43.698 user 0m1.256s 00:06:43.698 sys 0m0.157s 00:06:43.698 14:47:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.698 14:47:29 -- common/autotest_common.sh@10 -- # set +x 00:06:43.698 ************************************ 00:06:43.698 END TEST accel_decomp 00:06:43.698 ************************************ 00:06:43.957 14:47:29 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.957 14:47:29 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:43.957 14:47:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.957 14:47:29 -- common/autotest_common.sh@10 -- # set +x 00:06:43.957 
************************************ 00:06:43.957 START TEST accel_decmop_full 00:06:43.957 ************************************ 00:06:43.957 14:47:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.957 14:47:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.957 14:47:29 -- accel/accel.sh@17 -- # local accel_module 00:06:43.957 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:43.957 14:47:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.957 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:43.957 14:47:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.957 14:47:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.957 14:47:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.957 14:47:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.957 14:47:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.957 14:47:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.957 14:47:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.957 14:47:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.957 14:47:29 -- accel/accel.sh@41 -- # jq -r . 00:06:43.957 [2024-04-26 14:47:29.551989] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:43.957 [2024-04-26 14:47:29.552079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662583 ] 00:06:43.957 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.957 [2024-04-26 14:47:29.583171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:43.957 [2024-04-26 14:47:29.615014] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.215 [2024-04-26 14:47:29.708260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.215 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:44.215 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:44.215 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:44.215 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 14:47:29 -- accel/accel.sh@20 -- # val=0x1 00:06:44.215 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:44.215 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val=decompress 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val=software 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@22 -- # accel_module=software 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val=32 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val=32 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val=1 
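accel_decmop_full repeats the decompress case with -o 0 added, and the only visible difference in the trace is the payload record: '4096 bytes' in the plain test versus '111250 bytes' here. The inference, drawn from this log rather than from accel_perf's documented options, is that -o sets the per-operation transfer size and 0 selects the full size of the input file:

    # Same case with full-size buffers; the reading of -o 0 is an assumption (see above)
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0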
00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val=Yes 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:44.216 14:47:29 -- accel/accel.sh@20 -- # val= 00:06:44.216 14:47:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # IFS=: 00:06:44.216 14:47:29 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:30 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:30 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:30 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:30 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:30 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:30 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:30 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.589 14:47:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.589 14:47:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.589 00:06:45.589 real 0m1.417s 00:06:45.589 user 0m1.268s 00:06:45.589 sys 0m0.150s 00:06:45.589 14:47:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.589 14:47:30 -- common/autotest_common.sh@10 -- # set +x 00:06:45.589 ************************************ 00:06:45.589 END TEST accel_decmop_full 00:06:45.589 ************************************ 00:06:45.589 14:47:30 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:45.589 14:47:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:45.589 14:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.589 14:47:30 -- common/autotest_common.sh@10 -- # set +x 00:06:45.589 
************************************ 00:06:45.589 START TEST accel_decomp_mcore 00:06:45.589 ************************************ 00:06:45.589 14:47:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:45.589 14:47:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.589 14:47:31 -- accel/accel.sh@17 -- # local accel_module 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:45.589 14:47:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.589 14:47:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.589 14:47:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.589 14:47:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.589 14:47:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.589 14:47:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.589 14:47:31 -- accel/accel.sh@40 -- # local IFS=, 00:06:45.589 14:47:31 -- accel/accel.sh@41 -- # jq -r . 00:06:45.589 [2024-04-26 14:47:31.082349] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:45.589 [2024-04-26 14:47:31.082415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662755 ] 00:06:45.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.589 [2024-04-26 14:47:31.115730] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:45.589 [2024-04-26 14:47:31.146006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.589 [2024-04-26 14:47:31.240433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.589 [2024-04-26 14:47:31.240483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.589 [2024-04-26 14:47:31.240596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.589 [2024-04-26 14:47:31.240598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val=0xf 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val=decompress 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val= 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val=software 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@22 -- # accel_module=software 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val=32 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 
-- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val=32 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:47:31 -- accel/accel.sh@20 -- # val=1 00:06:45.589 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.590 14:47:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.590 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.590 14:47:31 -- accel/accel.sh@20 -- # val=Yes 00:06:45.590 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.590 14:47:31 -- accel/accel.sh@20 -- # val= 00:06:45.590 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:45.590 14:47:31 -- accel/accel.sh@20 -- # val= 00:06:45.590 14:47:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # IFS=: 00:06:45.590 14:47:31 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:46.962 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:46.962 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:46.962 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:46.962 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:46.962 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:46.962 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:46.962 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:46.962 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:46.962 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 
00:06:46.962 14:47:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.962 14:47:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.962 14:47:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.962 00:06:46.962 real 0m1.402s 00:06:46.962 user 0m4.681s 00:06:46.962 sys 0m0.143s 00:06:46.962 14:47:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:46.962 14:47:32 -- common/autotest_common.sh@10 -- # set +x 00:06:46.962 ************************************ 00:06:46.962 END TEST accel_decomp_mcore 00:06:46.962 ************************************ 00:06:46.962 14:47:32 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.962 14:47:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:46.962 14:47:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.962 14:47:32 -- common/autotest_common.sh@10 -- # set +x 00:06:46.962 ************************************ 00:06:46.962 START TEST accel_decomp_full_mcore 00:06:46.962 ************************************ 00:06:46.962 14:47:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.962 14:47:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.962 14:47:32 -- accel/accel.sh@17 -- # local accel_module 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:46.962 14:47:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.962 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:46.962 14:47:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.962 14:47:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.962 14:47:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.962 14:47:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.962 14:47:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.962 14:47:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.962 14:47:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.962 14:47:32 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.962 14:47:32 -- accel/accel.sh@41 -- # jq -r . 00:06:46.962 [2024-04-26 14:47:32.605832] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:46.962 [2024-04-26 14:47:32.605894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663031 ] 00:06:46.962 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.962 [2024-04-26 14:47:32.639271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
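The multicore test that just finished passed -m 0xf, and the trace bears the mask out: four reactors start on cores 0 through 3, and the summary shows about 4.7 CPU-seconds of user time inside 1.4 seconds of wall time, i.e. all four cores kept busy. Decoding such a hex core mask by hand is a short loop (sketch):

    # Print the core ids selected by an SPDK/DPDK-style hex core mask;
    # 0xf -> 0 1 2 3, matching the four "Reactor started on core N" notices above.
    mask=0xf
    for i in $(seq 0 31); do
        (( (mask >> i) & 1 )) && printf '%d ' "$i"
    done
    echo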
00:06:46.962 [2024-04-26 14:47:32.669537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.220 [2024-04-26 14:47:32.765034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.221 [2024-04-26 14:47:32.765077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.221 [2024-04-26 14:47:32.765195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.221 [2024-04-26 14:47:32.765197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val=0xf 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val=decompress 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val=software 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val=32 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 
-- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val=32 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val=1 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val=Yes 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:47.221 14:47:32 -- accel/accel.sh@20 -- # val= 00:06:47.221 14:47:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # IFS=: 00:06:47.221 14:47:32 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.593 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.593 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.593 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.593 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.593 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.593 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.593 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.593 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.593 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 
00:06:48.593 14:47:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.593 14:47:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:48.593 14:47:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.593 00:06:48.593 real 0m1.429s 00:06:48.593 user 0m4.762s 00:06:48.593 sys 0m0.153s 00:06:48.593 14:47:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.593 14:47:34 -- common/autotest_common.sh@10 -- # set +x 00:06:48.593 ************************************ 00:06:48.593 END TEST accel_decomp_full_mcore 00:06:48.593 ************************************ 00:06:48.593 14:47:34 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:48.593 14:47:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:48.593 14:47:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.593 14:47:34 -- common/autotest_common.sh@10 -- # set +x 00:06:48.593 ************************************ 00:06:48.593 START TEST accel_decomp_mthread 00:06:48.593 ************************************ 00:06:48.593 14:47:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:48.593 14:47:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.593 14:47:34 -- accel/accel.sh@17 -- # local accel_module 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.593 14:47:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:48.593 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.593 14:47:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:48.593 14:47:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.593 14:47:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.593 14:47:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.593 14:47:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.593 14:47:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.593 14:47:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.593 14:47:34 -- accel/accel.sh@40 -- # local IFS=, 00:06:48.593 14:47:34 -- accel/accel.sh@41 -- # jq -r . 00:06:48.593 [2024-04-26 14:47:34.157269] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:48.593 [2024-04-26 14:47:34.157334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663203 ] 00:06:48.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.593 [2024-04-26 14:47:34.189476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
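With several near-identical cases in a row, the quickest way to compare them is to pull the real/user/sys triplets out of a saved copy of this console output (the file name here is an assumption):

    # Extract every timing triplet from a saved copy of this log and group them per test
    grep -oE '(real|user|sys) +[0-9]+m[0-9.]+s' console.log | paste - - -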
00:06:48.593 [2024-04-26 14:47:34.219577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.593 [2024-04-26 14:47:34.307727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val=0x1 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val=decompress 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val=software 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@22 -- # accel_module=software 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val=32 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val=32 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val=2 
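accel_decomp_mthread reruns the case with -T 2, and the trace records val=2 at the position where the single-threaded runs record val=1, so -T evidently sets the worker thread count. The standalone equivalent, under the same assumptions as the earlier sketches:

    # Two worker threads instead of one
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -T 2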
00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.851 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.851 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.851 14:47:34 -- accel/accel.sh@20 -- # val=Yes 00:06:48.852 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.852 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.852 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.852 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.852 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.852 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.852 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:48.852 14:47:34 -- accel/accel.sh@20 -- # val= 00:06:48.852 14:47:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.852 14:47:34 -- accel/accel.sh@19 -- # IFS=: 00:06:48.852 14:47:34 -- accel/accel.sh@19 -- # read -r var val 00:06:50.223 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.223 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.223 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.223 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.223 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.223 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.223 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.223 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.223 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.223 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.223 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.223 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.223 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.223 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.223 14:47:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.223 14:47:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.223 14:47:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.223 00:06:50.223 real 0m1.409s 00:06:50.223 user 0m1.262s 00:06:50.223 sys 0m0.150s 00:06:50.223 14:47:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.223 14:47:35 -- common/autotest_common.sh@10 -- # set +x 00:06:50.223 ************************************ 00:06:50.223 END TEST accel_decomp_mthread 00:06:50.223 ************************************ 00:06:50.223 14:47:35 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 
-T 2 00:06:50.223 14:47:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:50.223 14:47:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.223 14:47:35 -- common/autotest_common.sh@10 -- # set +x 00:06:50.223 ************************************ 00:06:50.223 START TEST accel_deomp_full_mthread 00:06:50.223 ************************************ 00:06:50.223 14:47:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:50.223 14:47:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.223 14:47:35 -- accel/accel.sh@17 -- # local accel_module 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.223 14:47:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.223 14:47:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:50.223 14:47:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.223 14:47:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.223 14:47:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.223 14:47:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.223 14:47:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.223 14:47:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.223 14:47:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.223 14:47:35 -- accel/accel.sh@41 -- # jq -r . 00:06:50.223 [2024-04-26 14:47:35.690761] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:50.223 [2024-04-26 14:47:35.690825] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663366 ] 00:06:50.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.223 [2024-04-26 14:47:35.724323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
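Every case here is launched through run_test, which is what produces the asterisk-framed START TEST / END TEST banners and the timing triplet after each case. Roughly, as a paraphrased sketch (the real helper lives in test/common/autotest_common.sh and also propagates the test's exit status):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # emits the real/user/sys lines seen throughout this log
        echo "************ END TEST $name ************"
    }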
00:06:50.223 [2024-04-26 14:47:35.754856] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.223 [2024-04-26 14:47:35.845039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.223 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.223 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.223 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val=0x1 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val=decompress 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val=software 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val=32 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val=32 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val=2 
00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val=Yes 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:50.224 14:47:35 -- accel/accel.sh@20 -- # val= 00:06:50.224 14:47:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # IFS=: 00:06:50.224 14:47:35 -- accel/accel.sh@19 -- # read -r var val 00:06:51.596 14:47:37 -- accel/accel.sh@20 -- # val= 00:06:51.597 14:47:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # IFS=: 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # read -r var val 00:06:51.597 14:47:37 -- accel/accel.sh@20 -- # val= 00:06:51.597 14:47:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # IFS=: 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # read -r var val 00:06:51.597 14:47:37 -- accel/accel.sh@20 -- # val= 00:06:51.597 14:47:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # IFS=: 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # read -r var val 00:06:51.597 14:47:37 -- accel/accel.sh@20 -- # val= 00:06:51.597 14:47:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # IFS=: 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # read -r var val 00:06:51.597 14:47:37 -- accel/accel.sh@20 -- # val= 00:06:51.597 14:47:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # IFS=: 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # read -r var val 00:06:51.597 14:47:37 -- accel/accel.sh@20 -- # val= 00:06:51.597 14:47:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # IFS=: 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # read -r var val 00:06:51.597 14:47:37 -- accel/accel.sh@20 -- # val= 00:06:51.597 14:47:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # IFS=: 00:06:51.597 14:47:37 -- accel/accel.sh@19 -- # read -r var val 00:06:51.597 14:47:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.597 14:47:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.597 14:47:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.597 00:06:51.597 real 0m1.433s 00:06:51.597 user 0m1.290s 00:06:51.597 sys 0m0.146s 00:06:51.597 14:47:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.597 14:47:37 -- common/autotest_common.sh@10 -- # set +x 00:06:51.597 ************************************ 00:06:51.597 END TEST accel_deomp_full_mthread 00:06:51.597 ************************************ 00:06:51.597 14:47:37 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:51.597 14:47:37 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.597 14:47:37 -- accel/accel.sh@137 -- # build_accel_config 00:06:51.597 14:47:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.597 14:47:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:51.597 14:47:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.597 14:47:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.597 14:47:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.597 14:47:37 -- common/autotest_common.sh@10 -- # set +x 00:06:51.597 14:47:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.597 14:47:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.597 14:47:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:51.597 14:47:37 -- accel/accel.sh@41 -- # jq -r . 00:06:51.597 ************************************ 00:06:51.597 START TEST accel_dif_functional_tests 00:06:51.597 ************************************ 00:06:51.597 14:47:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.597 [2024-04-26 14:47:37.262427] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:51.597 [2024-04-26 14:47:37.262504] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663648 ] 00:06:51.597 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.597 [2024-04-26 14:47:37.293148] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:51.597 [2024-04-26 14:47:37.323307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.855 [2024-04-26 14:47:37.416878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.855 [2024-04-26 14:47:37.416946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.855 [2024-04-26 14:47:37.416948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.855 00:06:51.855 00:06:51.855 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.855 http://cunit.sourceforge.net/ 00:06:51.855 00:06:51.855 00:06:51.855 Suite: accel_dif 00:06:51.855 Test: verify: DIF generated, GUARD check ...passed 00:06:51.855 Test: verify: DIF generated, APPTAG check ...passed 00:06:51.855 Test: verify: DIF generated, REFTAG check ...passed 00:06:51.855 Test: verify: DIF not generated, GUARD check ...[2024-04-26 14:47:37.510692] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.855 [2024-04-26 14:47:37.510775] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.855 passed 00:06:51.855 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 14:47:37.510812] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.855 [2024-04-26 14:47:37.510837] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.855 passed 00:06:51.855 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 14:47:37.510867] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.855 [2024-04-26 14:47:37.510893] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:06:51.855 passed 00:06:51.855 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:51.855 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 14:47:37.510953] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:51.855 passed 00:06:51.855 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:51.855 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:51.855 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:51.855 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 14:47:37.511109] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:51.855 passed 00:06:51.855 Test: generate copy: DIF generated, GUARD check ...passed 00:06:51.855 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:51.855 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:51.855 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:51.855 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:51.855 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:51.855 Test: generate copy: iovecs-len validate ...[2024-04-26 14:47:37.511356] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:51.855 passed 00:06:51.855 Test: generate copy: buffer alignment validate ...passed 00:06:51.855 00:06:51.855 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.855 suites 1 1 n/a 0 0 00:06:51.855 tests 20 20 20 0 0 00:06:51.855 asserts 204 204 204 0 n/a 00:06:51.855 00:06:51.855 Elapsed time = 0.002 seconds 00:06:52.113 00:06:52.113 real 0m0.489s 00:06:52.113 user 0m0.727s 00:06:52.113 sys 0m0.175s 00:06:52.113 14:47:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.113 14:47:37 -- common/autotest_common.sh@10 -- # set +x 00:06:52.113 ************************************ 00:06:52.113 END TEST accel_dif_functional_tests 00:06:52.113 ************************************ 00:06:52.113 00:06:52.113 real 0m33.648s 00:06:52.113 user 0m35.716s 00:06:52.113 sys 0m5.546s 00:06:52.113 14:47:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.113 14:47:37 -- common/autotest_common.sh@10 -- # set +x 00:06:52.113 ************************************ 00:06:52.113 END TEST accel 00:06:52.113 ************************************ 00:06:52.113 14:47:37 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:52.113 14:47:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.113 14:47:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.113 14:47:37 -- common/autotest_common.sh@10 -- # set +x 00:06:52.371 ************************************ 00:06:52.371 START TEST accel_rpc 00:06:52.371 ************************************ 00:06:52.371 14:47:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh * Looking for test storage...
00:06:52.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:52.371 14:47:37 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.371 14:47:37 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3663729 00:06:52.371 14:47:37 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:52.371 14:47:37 -- accel/accel_rpc.sh@15 -- # waitforlisten 3663729 00:06:52.371 14:47:37 -- common/autotest_common.sh@817 -- # '[' -z 3663729 ']' 00:06:52.371 14:47:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.371 14:47:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:52.371 14:47:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.371 14:47:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:52.371 14:47:37 -- common/autotest_common.sh@10 -- # set +x 00:06:52.371 [2024-04-26 14:47:37.962149] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:52.371 [2024-04-26 14:47:37.962253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663729 ] 00:06:52.371 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.371 [2024-04-26 14:47:37.994926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.371 [2024-04-26 14:47:38.022226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.371 [2024-04-26 14:47:38.108987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.629 14:47:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:52.629 14:47:38 -- common/autotest_common.sh@850 -- # return 0 00:06:52.629 14:47:38 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:52.629 14:47:38 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:52.629 14:47:38 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:52.629 14:47:38 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:52.629 14:47:38 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:52.629 14:47:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.629 14:47:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.629 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:06:52.629 ************************************ 00:06:52.629 START TEST accel_assign_opcode 00:06:52.629 ************************************ 00:06:52.629 14:47:38 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:52.629 14:47:38 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:52.629 14:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.629 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:06:52.629 [2024-04-26 14:47:38.261901] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:52.629 14:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.629 14:47:38 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:52.629 14:47:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.629 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:06:52.629 [2024-04-26 14:47:38.269904] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:52.629 14:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.629 14:47:38 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:52.629 14:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.629 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:06:52.886 14:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.886 14:47:38 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:52.886 14:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.886 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:06:52.886 14:47:38 -- accel/accel_rpc.sh@42 -- # grep software 00:06:52.886 14:47:38 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:52.886 14:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.886 software 00:06:52.886 00:06:52.886 real 0m0.299s 00:06:52.886 user 0m0.039s 00:06:52.886 sys 0m0.008s 00:06:52.886 14:47:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.886 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:06:52.886 ************************************ 00:06:52.886 END TEST accel_assign_opcode 00:06:52.886 ************************************ 00:06:52.886 14:47:38 -- accel/accel_rpc.sh@55 -- # killprocess 3663729 00:06:52.886 14:47:38 -- common/autotest_common.sh@936 -- # '[' -z 3663729 ']' 00:06:52.886 14:47:38 -- common/autotest_common.sh@940 -- # kill -0 3663729 00:06:52.886 14:47:38 -- common/autotest_common.sh@941 -- # uname 00:06:52.886 14:47:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.886 14:47:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3663729 00:06:52.886 14:47:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.886 14:47:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.886 14:47:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3663729' 00:06:52.886 killing process with pid 3663729 00:06:52.886 14:47:38 -- common/autotest_common.sh@955 -- # kill 3663729 00:06:52.886 14:47:38 -- common/autotest_common.sh@960 -- # wait 3663729 00:06:53.452 00:06:53.452 real 0m1.151s 00:06:53.452 user 0m1.110s 00:06:53.452 sys 0m0.454s 00:06:53.452 14:47:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.452 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:06:53.452 ************************************ 00:06:53.452 END TEST accel_rpc 00:06:53.452 ************************************ 00:06:53.452 14:47:39 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.452 14:47:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.452 14:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.452 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:06:53.452 ************************************ 00:06:53.452 START TEST app_cmdline 00:06:53.452 ************************************ 00:06:53.452 14:47:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:53.452 * Looking for test storage... 
00:06:53.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:53.452 14:47:39 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.452 14:47:39 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3664017 00:06:53.452 14:47:39 -- app/cmdline.sh@18 -- # waitforlisten 3664017 00:06:53.452 14:47:39 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.452 14:47:39 -- common/autotest_common.sh@817 -- # '[' -z 3664017 ']' 00:06:53.452 14:47:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.452 14:47:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.452 14:47:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.452 14:47:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.452 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:06:53.732 [2024-04-26 14:47:39.238822] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:06:53.732 [2024-04-26 14:47:39.238906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664017 ] 00:06:53.732 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.732 [2024-04-26 14:47:39.271511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
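The cmdline test above launches spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so the target serves only those two JSON-RPC methods and rejects everything else with error -32601, which the trace just below demonstrates. A minimal standalone sketch of the same allow-list behavior (assuming it is run from an SPDK checkout, with the target on its default socket /var/tmp/spdk.sock):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version          # allowed: prints the version object
  ./scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats    # rejected: JSON-RPC error -32601 "Method not found"
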
00:06:53.732 [2024-04-26 14:47:39.315508] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.732 [2024-04-26 14:47:39.414301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.989 14:47:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:53.989 14:47:39 -- common/autotest_common.sh@850 -- # return 0 00:06:53.989 14:47:39 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:54.246 { 00:06:54.246 "version": "SPDK v24.05-pre git sha1 8571999d8", 00:06:54.246 "fields": { 00:06:54.246 "major": 24, 00:06:54.246 "minor": 5, 00:06:54.246 "patch": 0, 00:06:54.246 "suffix": "-pre", 00:06:54.246 "commit": "8571999d8" 00:06:54.246 } 00:06:54.246 } 00:06:54.246 14:47:39 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:54.246 14:47:39 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:54.246 14:47:39 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:54.246 14:47:39 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:54.246 14:47:39 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:54.246 14:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.246 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:06:54.246 14:47:39 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:54.246 14:47:39 -- app/cmdline.sh@26 -- # sort 00:06:54.246 14:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.246 14:47:39 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:54.246 14:47:39 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:54.246 14:47:39 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.246 14:47:39 -- common/autotest_common.sh@638 -- # local es=0 00:06:54.246 14:47:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.246 14:47:39 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.246 14:47:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:54.246 14:47:39 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.246 14:47:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:54.246 14:47:39 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.246 14:47:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:54.246 14:47:39 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.246 14:47:39 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:54.246 14:47:39 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.503 request: 00:06:54.503 { 00:06:54.503 "method": "env_dpdk_get_mem_stats", 00:06:54.503 "req_id": 1 00:06:54.503 } 00:06:54.503 Got JSON-RPC error response 00:06:54.503 response: 00:06:54.503 { 00:06:54.503 "code": -32601, 00:06:54.503 "message": "Method not found" 00:06:54.503 } 00:06:54.503 14:47:40 -- common/autotest_common.sh@641 -- # es=1 00:06:54.503 14:47:40 -- 
common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:54.503 14:47:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:54.503 14:47:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:54.503 14:47:40 -- app/cmdline.sh@1 -- # killprocess 3664017 00:06:54.503 14:47:40 -- common/autotest_common.sh@936 -- # '[' -z 3664017 ']' 00:06:54.503 14:47:40 -- common/autotest_common.sh@940 -- # kill -0 3664017 00:06:54.503 14:47:40 -- common/autotest_common.sh@941 -- # uname 00:06:54.503 14:47:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:54.503 14:47:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3664017 00:06:54.761 14:47:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:54.761 14:47:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:54.761 14:47:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3664017' 00:06:54.761 killing process with pid 3664017 00:06:54.761 14:47:40 -- common/autotest_common.sh@955 -- # kill 3664017 00:06:54.761 14:47:40 -- common/autotest_common.sh@960 -- # wait 3664017 00:06:55.019 00:06:55.019 real 0m1.524s 00:06:55.019 user 0m1.949s 00:06:55.019 sys 0m0.483s 00:06:55.019 14:47:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.019 14:47:40 -- common/autotest_common.sh@10 -- # set +x 00:06:55.019 ************************************ 00:06:55.019 END TEST app_cmdline 00:06:55.019 ************************************ 00:06:55.019 14:47:40 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:55.019 14:47:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.019 14:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.019 14:47:40 -- common/autotest_common.sh@10 -- # set +x 00:06:55.276 ************************************ 00:06:55.276 START TEST version 00:06:55.276 ************************************ 00:06:55.276 14:47:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:55.276 * Looking for test storage... 
00:06:55.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:55.276 14:47:40 -- app/version.sh@17 -- # get_header_version major 00:06:55.276 14:47:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.276 14:47:40 -- app/version.sh@14 -- # cut -f2 00:06:55.276 14:47:40 -- app/version.sh@14 -- # tr -d '"' 00:06:55.276 14:47:40 -- app/version.sh@17 -- # major=24 00:06:55.276 14:47:40 -- app/version.sh@18 -- # get_header_version minor 00:06:55.276 14:47:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.276 14:47:40 -- app/version.sh@14 -- # cut -f2 00:06:55.276 14:47:40 -- app/version.sh@14 -- # tr -d '"' 00:06:55.276 14:47:40 -- app/version.sh@18 -- # minor=5 00:06:55.276 14:47:40 -- app/version.sh@19 -- # get_header_version patch 00:06:55.276 14:47:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.276 14:47:40 -- app/version.sh@14 -- # cut -f2 00:06:55.276 14:47:40 -- app/version.sh@14 -- # tr -d '"' 00:06:55.276 14:47:40 -- app/version.sh@19 -- # patch=0 00:06:55.276 14:47:40 -- app/version.sh@20 -- # get_header_version suffix 00:06:55.276 14:47:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:55.276 14:47:40 -- app/version.sh@14 -- # cut -f2 00:06:55.276 14:47:40 -- app/version.sh@14 -- # tr -d '"' 00:06:55.276 14:47:40 -- app/version.sh@20 -- # suffix=-pre 00:06:55.276 14:47:40 -- app/version.sh@22 -- # version=24.5 00:06:55.276 14:47:40 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.276 14:47:40 -- app/version.sh@28 -- # version=24.5rc0 00:06:55.276 14:47:40 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:55.276 14:47:40 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:55.276 14:47:40 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:55.276 14:47:40 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:55.276 00:06:55.276 real 0m0.114s 00:06:55.276 user 0m0.063s 00:06:55.276 sys 0m0.072s 00:06:55.276 14:47:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.276 14:47:40 -- common/autotest_common.sh@10 -- # set +x 00:06:55.276 ************************************ 00:06:55.276 END TEST version 00:06:55.276 ************************************ 00:06:55.276 14:47:40 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:55.276 14:47:40 -- spdk/autotest.sh@194 -- # uname -s 00:06:55.276 14:47:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:55.276 14:47:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:55.276 14:47:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:55.276 14:47:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:55.276 14:47:40 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:55.276 14:47:40 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:55.276 14:47:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:55.276 14:47:40 -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.276 14:47:40 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:55.276 14:47:40 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:55.276 14:47:40 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:55.276 14:47:40 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:55.276 14:47:40 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:55.276 14:47:40 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:55.276 14:47:40 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:55.276 14:47:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:55.276 14:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.276 14:47:40 -- common/autotest_common.sh@10 -- # set +x 00:06:55.534 ************************************ 00:06:55.534 START TEST nvmf_tcp 00:06:55.534 ************************************ 00:06:55.534 14:47:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:55.534 * Looking for test storage... 00:06:55.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:55.534 14:47:41 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:55.534 14:47:41 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:55.534 14:47:41 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.534 14:47:41 -- nvmf/common.sh@7 -- # uname -s 00:06:55.534 14:47:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.534 14:47:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.534 14:47:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.534 14:47:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.534 14:47:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.534 14:47:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.534 14:47:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.534 14:47:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.534 14:47:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.534 14:47:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.534 14:47:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:55.534 14:47:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:55.534 14:47:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.534 14:47:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.534 14:47:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.534 14:47:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.534 14:47:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.534 14:47:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.534 14:47:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.534 14:47:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.534 14:47:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.534 14:47:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.534 14:47:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.534 14:47:41 -- paths/export.sh@5 -- # export PATH 00:06:55.534 14:47:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.534 14:47:41 -- nvmf/common.sh@47 -- # : 0 00:06:55.534 14:47:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.534 14:47:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.534 14:47:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.534 14:47:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.534 14:47:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.534 14:47:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.534 14:47:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.534 14:47:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.534 14:47:41 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:55.534 14:47:41 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:55.534 14:47:41 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:55.534 14:47:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:55.534 14:47:41 -- common/autotest_common.sh@10 -- # set +x 00:06:55.534 14:47:41 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:55.534 14:47:41 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:55.534 14:47:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:55.534 14:47:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.534 14:47:41 -- common/autotest_common.sh@10 -- # set +x 00:06:55.534 ************************************ 00:06:55.534 START TEST nvmf_example 00:06:55.534 ************************************ 00:06:55.534 14:47:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:55.534 * Looking for test storage... 
00:06:55.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.534 14:47:41 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.534 14:47:41 -- nvmf/common.sh@7 -- # uname -s 00:06:55.534 14:47:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.534 14:47:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.534 14:47:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.534 14:47:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.534 14:47:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.534 14:47:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.534 14:47:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.534 14:47:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.534 14:47:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.534 14:47:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.534 14:47:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:55.534 14:47:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:55.534 14:47:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.534 14:47:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.534 14:47:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.534 14:47:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.534 14:47:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.534 14:47:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.534 14:47:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.534 14:47:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.534 14:47:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.534 14:47:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.534 14:47:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.534 14:47:41 -- paths/export.sh@5 -- # export PATH 00:06:55.534 14:47:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.535 14:47:41 -- nvmf/common.sh@47 -- # : 0 00:06:55.535 14:47:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.535 14:47:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.535 14:47:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.535 14:47:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.535 14:47:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.535 14:47:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.535 14:47:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.535 14:47:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.535 14:47:41 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:55.535 14:47:41 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:55.535 14:47:41 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:55.535 14:47:41 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:55.535 14:47:41 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:55.535 14:47:41 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:55.535 14:47:41 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:55.535 14:47:41 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:55.535 14:47:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:55.535 14:47:41 -- common/autotest_common.sh@10 -- # set +x 00:06:55.535 14:47:41 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:55.535 14:47:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:55.535 14:47:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.535 14:47:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:55.535 14:47:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:55.535 14:47:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:55.535 14:47:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.535 14:47:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.535 14:47:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.535 14:47:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:55.535 14:47:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:55.535 14:47:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:55.535 14:47:41 -- 
common/autotest_common.sh@10 -- # set +x 00:06:58.060 14:47:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:58.060 14:47:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:58.060 14:47:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:58.060 14:47:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:58.060 14:47:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:58.060 14:47:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:58.060 14:47:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:58.060 14:47:43 -- nvmf/common.sh@295 -- # net_devs=() 00:06:58.060 14:47:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:58.060 14:47:43 -- nvmf/common.sh@296 -- # e810=() 00:06:58.060 14:47:43 -- nvmf/common.sh@296 -- # local -ga e810 00:06:58.060 14:47:43 -- nvmf/common.sh@297 -- # x722=() 00:06:58.060 14:47:43 -- nvmf/common.sh@297 -- # local -ga x722 00:06:58.060 14:47:43 -- nvmf/common.sh@298 -- # mlx=() 00:06:58.060 14:47:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:58.060 14:47:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.060 14:47:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:58.060 14:47:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:58.060 14:47:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:58.060 14:47:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.060 14:47:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:58.060 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:58.060 14:47:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.060 14:47:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:58.060 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:58.060 14:47:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:06:58.060 14:47:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:58.060 14:47:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.060 14:47:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.060 14:47:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:58.060 14:47:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.060 14:47:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:58.060 Found net devices under 0000:84:00.0: cvl_0_0 00:06:58.060 14:47:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.060 14:47:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.060 14:47:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.060 14:47:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:58.060 14:47:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.060 14:47:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:58.060 Found net devices under 0000:84:00.1: cvl_0_1 00:06:58.060 14:47:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.060 14:47:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:58.060 14:47:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:58.060 14:47:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:58.060 14:47:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.060 14:47:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.060 14:47:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.060 14:47:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:58.060 14:47:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.060 14:47:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.060 14:47:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:58.060 14:47:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.060 14:47:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.060 14:47:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:58.060 14:47:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:58.060 14:47:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.060 14:47:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.060 14:47:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.060 14:47:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.060 14:47:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:58.060 14:47:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.060 14:47:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.060 14:47:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.060 14:47:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:58.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:58.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:06:58.060 00:06:58.060 --- 10.0.0.2 ping statistics --- 00:06:58.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.060 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:06:58.060 14:47:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:06:58.060 00:06:58.060 --- 10.0.0.1 ping statistics --- 00:06:58.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.060 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:06:58.060 14:47:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.060 14:47:43 -- nvmf/common.sh@411 -- # return 0 00:06:58.060 14:47:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:58.060 14:47:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.060 14:47:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:58.060 14:47:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.060 14:47:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:58.060 14:47:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:58.060 14:47:43 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:58.060 14:47:43 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:58.060 14:47:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:58.060 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.060 14:47:43 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:58.060 14:47:43 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:58.060 14:47:43 -- target/nvmf_example.sh@34 -- # nvmfpid=3666007 00:06:58.060 14:47:43 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:58.060 14:47:43 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.060 14:47:43 -- target/nvmf_example.sh@36 -- # waitforlisten 3666007 00:06:58.061 14:47:43 -- common/autotest_common.sh@817 -- # '[' -z 3666007 ']' 00:06:58.061 14:47:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.061 14:47:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:58.061 14:47:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:58.061 14:47:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:58.061 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.061 14:47:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:58.061 14:47:43 -- common/autotest_common.sh@850 -- # return 0 00:06:58.061 14:47:43 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:58.061 14:47:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:58.061 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.061 14:47:43 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.061 14:47:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:58.061 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.061 14:47:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:58.061 14:47:43 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:58.061 14:47:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:58.061 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.323 14:47:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:58.323 14:47:43 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:58.323 14:47:43 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:58.323 14:47:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:58.323 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.323 14:47:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:58.323 14:47:43 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:58.323 14:47:43 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:58.323 14:47:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:58.323 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.323 14:47:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:58.323 14:47:43 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:58.323 14:47:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:58.323 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.323 14:47:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:58.323 14:47:43 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:58.323 14:47:43 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:58.323 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.338 Initializing NVMe Controllers 00:07:08.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:08.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:08.338 Initialization complete. Launching workers. 
00:07:08.338 ======================================================== 00:07:08.338 Latency(us) 00:07:08.338 Device Information : IOPS MiB/s Average min max 00:07:08.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14749.29 57.61 4338.70 585.72 15272.14 00:07:08.338 ======================================================== 00:07:08.338 Total : 14749.29 57.61 4338.70 585.72 15272.14 00:07:08.338 00:07:08.595 14:47:54 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:08.595 14:47:54 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:08.595 14:47:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:08.595 14:47:54 -- nvmf/common.sh@117 -- # sync 00:07:08.595 14:47:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.595 14:47:54 -- nvmf/common.sh@120 -- # set +e 00:07:08.595 14:47:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.595 14:47:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.595 rmmod nvme_tcp 00:07:08.595 rmmod nvme_fabrics 00:07:08.595 rmmod nvme_keyring 00:07:08.595 14:47:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.595 14:47:54 -- nvmf/common.sh@124 -- # set -e 00:07:08.595 14:47:54 -- nvmf/common.sh@125 -- # return 0 00:07:08.595 14:47:54 -- nvmf/common.sh@478 -- # '[' -n 3666007 ']' 00:07:08.595 14:47:54 -- nvmf/common.sh@479 -- # killprocess 3666007 00:07:08.595 14:47:54 -- common/autotest_common.sh@936 -- # '[' -z 3666007 ']' 00:07:08.595 14:47:54 -- common/autotest_common.sh@940 -- # kill -0 3666007 00:07:08.595 14:47:54 -- common/autotest_common.sh@941 -- # uname 00:07:08.595 14:47:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.595 14:47:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3666007 00:07:08.595 14:47:54 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:08.595 14:47:54 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:08.595 14:47:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3666007' 00:07:08.595 killing process with pid 3666007 00:07:08.595 14:47:54 -- common/autotest_common.sh@955 -- # kill 3666007 00:07:08.595 14:47:54 -- common/autotest_common.sh@960 -- # wait 3666007 00:07:08.853 nvmf threads initialize successfully 00:07:08.853 bdev subsystem init successfully 00:07:08.853 created a nvmf target service 00:07:08.853 create targets's poll groups done 00:07:08.853 all subsystems of target started 00:07:08.853 nvmf target is running 00:07:08.853 all subsystems of target stopped 00:07:08.853 destroy targets's poll groups done 00:07:08.853 destroyed the nvmf target service 00:07:08.853 bdev subsystem finish successfully 00:07:08.853 nvmf threads destroy successfully 00:07:08.853 14:47:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:08.853 14:47:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:08.853 14:47:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:08.853 14:47:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.853 14:47:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.853 14:47:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.853 14:47:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.853 14:47:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.752 14:47:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:10.752 14:47:56 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:10.752 14:47:56 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:07:10.753 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:10.753 00:07:10.753 real 0m15.231s 00:07:10.753 user 0m42.114s 00:07:10.753 sys 0m3.585s 00:07:10.753 14:47:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.753 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:10.753 ************************************ 00:07:10.753 END TEST nvmf_example 00:07:10.753 ************************************ 00:07:10.753 14:47:56 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:10.753 14:47:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.753 14:47:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.753 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:11.037 ************************************ 00:07:11.037 START TEST nvmf_filesystem 00:07:11.037 ************************************ 00:07:11.037 14:47:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:11.037 * Looking for test storage... 00:07:11.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.037 14:47:56 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:11.037 14:47:56 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:11.037 14:47:56 -- common/autotest_common.sh@34 -- # set -e 00:07:11.037 14:47:56 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:11.037 14:47:56 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:11.037 14:47:56 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:11.037 14:47:56 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:11.037 14:47:56 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:11.037 14:47:56 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:11.037 14:47:56 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:11.037 14:47:56 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:11.037 14:47:56 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:11.037 14:47:56 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:11.037 14:47:56 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:11.037 14:47:56 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:11.037 14:47:56 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:11.037 14:47:56 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:11.037 14:47:56 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:11.037 14:47:56 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:11.037 14:47:56 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:11.037 14:47:56 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:11.037 14:47:56 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:11.037 14:47:56 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:11.037 14:47:56 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:11.037 14:47:56 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:11.037 14:47:56 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:11.037 14:47:56 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.037 14:47:56 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:11.037 14:47:56 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:11.037 14:47:56 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:11.037 14:47:56 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:11.037 14:47:56 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:11.037 14:47:56 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:11.037 14:47:56 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:11.037 14:47:56 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:11.037 14:47:56 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:11.037 14:47:56 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:11.037 14:47:56 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:11.037 14:47:56 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:11.037 14:47:56 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:11.037 14:47:56 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:11.037 14:47:56 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:11.037 14:47:56 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:11.037 14:47:56 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:11.037 14:47:56 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:11.037 14:47:56 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:11.037 14:47:56 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:11.037 14:47:56 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:11.037 14:47:56 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:11.037 14:47:56 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:11.037 14:47:56 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:11.037 14:47:56 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:11.037 14:47:56 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:11.037 14:47:56 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:11.037 14:47:56 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:11.037 14:47:56 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:11.037 14:47:56 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:11.037 14:47:56 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:11.037 14:47:56 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:11.037 14:47:56 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:11.037 14:47:56 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:11.037 14:47:56 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:11.037 14:47:56 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:11.037 14:47:56 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:11.037 14:47:56 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:11.037 14:47:56 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:11.037 14:47:56 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:11.037 14:47:56 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:11.037 14:47:56 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:11.037 14:47:56 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:11.037 14:47:56 -- common/build_config.sh@63 
-- # CONFIG_RDMA_PROV=verbs 00:07:11.037 14:47:56 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:11.037 14:47:56 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:11.037 14:47:56 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:11.037 14:47:56 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:11.037 14:47:56 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:11.037 14:47:56 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:11.037 14:47:56 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:11.037 14:47:56 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:11.037 14:47:56 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:11.037 14:47:56 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:11.037 14:47:56 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:11.037 14:47:56 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:11.037 14:47:56 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:11.038 14:47:56 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:11.038 14:47:56 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:11.038 14:47:56 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:11.038 14:47:56 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:11.038 14:47:56 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:11.038 14:47:56 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:11.038 14:47:56 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.038 14:47:56 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.038 14:47:56 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.038 14:47:56 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.038 14:47:56 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.038 14:47:56 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.038 14:47:56 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.038 14:47:56 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.038 14:47:56 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:11.038 14:47:56 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:11.038 14:47:56 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:11.038 14:47:56 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:11.038 14:47:56 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:11.038 14:47:56 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:11.038 14:47:56 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:11.038 14:47:56 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:11.038 #define SPDK_CONFIG_H 00:07:11.038 #define SPDK_CONFIG_APPS 1 00:07:11.038 #define SPDK_CONFIG_ARCH native 00:07:11.038 #undef SPDK_CONFIG_ASAN 00:07:11.038 #undef SPDK_CONFIG_AVAHI 00:07:11.038 #undef SPDK_CONFIG_CET 00:07:11.038 #define SPDK_CONFIG_COVERAGE 1 00:07:11.038 #define 
SPDK_CONFIG_CROSS_PREFIX 00:07:11.038 #undef SPDK_CONFIG_CRYPTO 00:07:11.038 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:11.038 #undef SPDK_CONFIG_CUSTOMOCF 00:07:11.038 #undef SPDK_CONFIG_DAOS 00:07:11.038 #define SPDK_CONFIG_DAOS_DIR 00:07:11.038 #define SPDK_CONFIG_DEBUG 1 00:07:11.038 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:11.038 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:11.038 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:11.038 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:11.038 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:11.038 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.038 #define SPDK_CONFIG_EXAMPLES 1 00:07:11.038 #undef SPDK_CONFIG_FC 00:07:11.038 #define SPDK_CONFIG_FC_PATH 00:07:11.038 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:11.038 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:11.038 #undef SPDK_CONFIG_FUSE 00:07:11.038 #undef SPDK_CONFIG_FUZZER 00:07:11.038 #define SPDK_CONFIG_FUZZER_LIB 00:07:11.038 #undef SPDK_CONFIG_GOLANG 00:07:11.038 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:11.038 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:11.038 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:11.038 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:11.038 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:11.038 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:11.038 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:11.038 #define SPDK_CONFIG_IDXD 1 00:07:11.038 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:11.038 #undef SPDK_CONFIG_IPSEC_MB 00:07:11.038 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:11.038 #define SPDK_CONFIG_ISAL 1 00:07:11.038 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:11.038 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:11.038 #define SPDK_CONFIG_LIBDIR 00:07:11.038 #undef SPDK_CONFIG_LTO 00:07:11.038 #define SPDK_CONFIG_MAX_LCORES 00:07:11.038 #define SPDK_CONFIG_NVME_CUSE 1 00:07:11.038 #undef SPDK_CONFIG_OCF 00:07:11.038 #define SPDK_CONFIG_OCF_PATH 00:07:11.038 #define SPDK_CONFIG_OPENSSL_PATH 00:07:11.038 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:11.038 #define SPDK_CONFIG_PGO_DIR 00:07:11.038 #undef SPDK_CONFIG_PGO_USE 00:07:11.038 #define SPDK_CONFIG_PREFIX /usr/local 00:07:11.038 #undef SPDK_CONFIG_RAID5F 00:07:11.038 #undef SPDK_CONFIG_RBD 00:07:11.038 #define SPDK_CONFIG_RDMA 1 00:07:11.038 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:11.038 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:11.038 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:11.038 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:11.038 #define SPDK_CONFIG_SHARED 1 00:07:11.038 #undef SPDK_CONFIG_SMA 00:07:11.038 #define SPDK_CONFIG_TESTS 1 00:07:11.038 #undef SPDK_CONFIG_TSAN 00:07:11.038 #define SPDK_CONFIG_UBLK 1 00:07:11.038 #define SPDK_CONFIG_UBSAN 1 00:07:11.038 #undef SPDK_CONFIG_UNIT_TESTS 00:07:11.038 #undef SPDK_CONFIG_URING 00:07:11.038 #define SPDK_CONFIG_URING_PATH 00:07:11.038 #undef SPDK_CONFIG_URING_ZNS 00:07:11.038 #undef SPDK_CONFIG_USDT 00:07:11.038 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:11.038 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:11.038 #define SPDK_CONFIG_VFIO_USER 1 00:07:11.038 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:11.038 #define SPDK_CONFIG_VHOST 1 00:07:11.038 #define SPDK_CONFIG_VIRTIO 1 00:07:11.038 #undef SPDK_CONFIG_VTUNE 00:07:11.038 #define SPDK_CONFIG_VTUNE_DIR 00:07:11.038 #define SPDK_CONFIG_WERROR 1 00:07:11.038 #define SPDK_CONFIG_WPDK_DIR 00:07:11.038 #undef SPDK_CONFIG_XNVME 00:07:11.038 #endif /* 
SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:11.038 14:47:56 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:11.038 14:47:56 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.038 14:47:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.038 14:47:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.038 14:47:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.038 14:47:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.038 14:47:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.038 14:47:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.038 14:47:56 -- paths/export.sh@5 -- # export PATH 00:07:11.038 14:47:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.038 14:47:56 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.038 14:47:56 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.038 14:47:56 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.038 14:47:56 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.038 14:47:56 -- pm/common@7 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:11.038 14:47:56 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.038 14:47:56 -- pm/common@67 -- # TEST_TAG=N/A 00:07:11.038 14:47:56 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:11.038 14:47:56 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:11.038 14:47:56 -- pm/common@71 -- # uname -s 00:07:11.038 14:47:56 -- pm/common@71 -- # PM_OS=Linux 00:07:11.038 14:47:56 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:11.038 14:47:56 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:11.038 14:47:56 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:11.038 14:47:56 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:07:11.038 14:47:56 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:11.038 14:47:56 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:11.038 14:47:56 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:11.038 14:47:56 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:11.038 14:47:56 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:11.038 14:47:56 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:11.038 14:47:56 -- common/autotest_common.sh@57 -- # : 1 00:07:11.038 14:47:56 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:11.038 14:47:56 -- common/autotest_common.sh@61 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:11.038 14:47:56 -- common/autotest_common.sh@63 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:11.038 14:47:56 -- common/autotest_common.sh@65 -- # : 1 00:07:11.038 14:47:56 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:11.038 14:47:56 -- common/autotest_common.sh@67 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:11.038 14:47:56 -- common/autotest_common.sh@69 -- # : 00:07:11.038 14:47:56 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:11.038 14:47:56 -- common/autotest_common.sh@71 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:11.038 14:47:56 -- common/autotest_common.sh@73 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:11.038 14:47:56 -- common/autotest_common.sh@75 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:11.038 14:47:56 -- common/autotest_common.sh@77 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:11.038 14:47:56 -- common/autotest_common.sh@79 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:11.038 14:47:56 -- common/autotest_common.sh@81 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:11.038 14:47:56 -- common/autotest_common.sh@83 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:11.038 14:47:56 -- common/autotest_common.sh@85 -- # : 1 00:07:11.038 14:47:56 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:11.038 14:47:56 -- common/autotest_common.sh@87 -- # : 0 00:07:11.038 14:47:56 -- 
common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:11.038 14:47:56 -- common/autotest_common.sh@89 -- # : 0 00:07:11.038 14:47:56 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:11.038 14:47:56 -- common/autotest_common.sh@91 -- # : 1 00:07:11.038 14:47:56 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:11.038 14:47:56 -- common/autotest_common.sh@93 -- # : 1 00:07:11.039 14:47:56 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:11.039 14:47:56 -- common/autotest_common.sh@95 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:11.039 14:47:56 -- common/autotest_common.sh@97 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:11.039 14:47:56 -- common/autotest_common.sh@99 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:11.039 14:47:56 -- common/autotest_common.sh@101 -- # : tcp 00:07:11.039 14:47:56 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:11.039 14:47:56 -- common/autotest_common.sh@103 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:11.039 14:47:56 -- common/autotest_common.sh@105 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:11.039 14:47:56 -- common/autotest_common.sh@107 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:11.039 14:47:56 -- common/autotest_common.sh@109 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:11.039 14:47:56 -- common/autotest_common.sh@111 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:11.039 14:47:56 -- common/autotest_common.sh@113 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:11.039 14:47:56 -- common/autotest_common.sh@115 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:11.039 14:47:56 -- common/autotest_common.sh@117 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:11.039 14:47:56 -- common/autotest_common.sh@119 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:11.039 14:47:56 -- common/autotest_common.sh@121 -- # : 1 00:07:11.039 14:47:56 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:11.039 14:47:56 -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:11.039 14:47:56 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:11.039 14:47:56 -- common/autotest_common.sh@125 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:11.039 14:47:56 -- common/autotest_common.sh@127 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:11.039 14:47:56 -- common/autotest_common.sh@129 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:11.039 14:47:56 -- common/autotest_common.sh@131 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:11.039 14:47:56 -- common/autotest_common.sh@133 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:11.039 14:47:56 
-- common/autotest_common.sh@135 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:11.039 14:47:56 -- common/autotest_common.sh@137 -- # : main 00:07:11.039 14:47:56 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:11.039 14:47:56 -- common/autotest_common.sh@139 -- # : true 00:07:11.039 14:47:56 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:11.039 14:47:56 -- common/autotest_common.sh@141 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:11.039 14:47:56 -- common/autotest_common.sh@143 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:11.039 14:47:56 -- common/autotest_common.sh@145 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:11.039 14:47:56 -- common/autotest_common.sh@147 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:11.039 14:47:56 -- common/autotest_common.sh@149 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:11.039 14:47:56 -- common/autotest_common.sh@151 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:11.039 14:47:56 -- common/autotest_common.sh@153 -- # : e810 00:07:11.039 14:47:56 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:11.039 14:47:56 -- common/autotest_common.sh@155 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:11.039 14:47:56 -- common/autotest_common.sh@157 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:11.039 14:47:56 -- common/autotest_common.sh@159 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:11.039 14:47:56 -- common/autotest_common.sh@161 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:11.039 14:47:56 -- common/autotest_common.sh@163 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:11.039 14:47:56 -- common/autotest_common.sh@166 -- # : 00:07:11.039 14:47:56 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:11.039 14:47:56 -- common/autotest_common.sh@168 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:11.039 14:47:56 -- common/autotest_common.sh@170 -- # : 0 00:07:11.039 14:47:56 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:11.039 14:47:56 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.039 14:47:56 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.039 14:47:56 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:11.039 14:47:56 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:11.039 14:47:56 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.039 14:47:56 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.039 
14:47:56 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.039 14:47:56 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.039 14:47:56 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.039 14:47:56 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.039 14:47:56 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.039 14:47:56 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.039 14:47:56 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:11.039 14:47:56 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 
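An aside on the PATH, LD_LIBRARY_PATH and PYTHONPATH values above: paths/export.sh is re-sourced by every nested source of common.sh, and each pass prepends the same go/protoc/golangci directories, so the variables accumulate long runs of duplicate segments. That is harmless but noisy; a hypothetical helper (not part of the SPDK harness) that collapses such duplicates could look like this:

  dedupe_path() {
      # Split a colon-separated list, keep the first occurrence of each
      # segment, and print the deduplicated list. Sketch only: segments
      # containing glob characters are not handled.
      local out='' seg
      local IFS=:
      for seg in $1; do
          [ -n "$seg" ] || continue          # skip empty fields from ':' runs
          case ":$out:" in
              *":$seg:"*) ;;                 # already seen, drop it
              *) out=${out:+$out:}$seg ;;
          esac
      done
      printf '%s\n' "$out"
  }

  PATH=$(dedupe_path "$PATH")                # example use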
00:07:11.039 14:47:56 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.039 14:47:56 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.039 14:47:56 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.039 14:47:56 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.039 14:47:56 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:11.039 14:47:56 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:11.039 14:47:56 -- common/autotest_common.sh@199 -- # cat 00:07:11.039 14:47:56 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:11.039 14:47:56 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.039 14:47:56 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.039 14:47:56 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.039 14:47:56 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.039 14:47:56 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:11.039 14:47:56 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:11.039 14:47:56 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.039 14:47:56 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.039 14:47:56 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.039 14:47:56 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.039 14:47:56 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.039 14:47:56 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.039 14:47:56 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.039 14:47:56 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.039 14:47:56 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.039 14:47:56 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.039 14:47:56 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.039 14:47:56 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.039 14:47:56 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:11.039 14:47:56 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:11.039 14:47:56 -- common/autotest_common.sh@252 -- # valgrind= 00:07:11.039 14:47:56 -- common/autotest_common.sh@258 -- # uname -s 00:07:11.039 14:47:56 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 
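The ASAN_OPTIONS and UBSAN_OPTIONS strings just set are colon-separated key=value lists that the sanitizer runtimes parse from the environment when an instrumented process starts, so they only take effect if exported before the binary is launched. A minimal illustration (option values copied from the trace above; the binary path is illustrative only):

  export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
  export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
  ./build/bin/nvmf_tgt    # any sanitizer-instrumented binary picks these up at startup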
00:07:11.039 14:47:56 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:11.039 14:47:56 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:11.039 14:47:56 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:11.039 14:47:56 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:11.039 14:47:56 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:11.039 14:47:56 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:11.039 14:47:56 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:07:11.039 14:47:56 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:11.039 14:47:56 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:11.039 14:47:56 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:11.039 14:47:56 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:11.039 14:47:56 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:11.040 14:47:56 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:11.040 14:47:56 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:11.040 14:47:56 -- common/autotest_common.sh@307 -- # [[ -z 3667721 ]] 00:07:11.040 14:47:56 -- common/autotest_common.sh@307 -- # kill -0 3667721 00:07:11.040 14:47:56 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:11.040 14:47:56 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:11.040 14:47:56 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:11.040 14:47:56 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:11.040 14:47:56 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:11.040 14:47:56 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:11.040 14:47:56 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:11.040 14:47:56 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:11.040 14:47:56 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.PZt54o 00:07:11.040 14:47:56 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:11.040 14:47:56 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:11.040 14:47:56 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:11.040 14:47:56 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.PZt54o/tests/target /tmp/spdk.PZt54o 00:07:11.040 14:47:56 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:11.040 14:47:56 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:11.040 14:47:56 -- common/autotest_common.sh@316 -- # df -T 00:07:11.040 14:47:56 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:11.040 14:47:56 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:11.040 14:47:56 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:11.040 14:47:56 -- 
common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:11.040 14:47:56 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:07:11.040 14:47:56 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # avails["$mount"]=35804598272 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # sizes["$mount"]=45083295744 00:07:11.040 14:47:56 -- common/autotest_common.sh@352 -- # uses["$mount"]=9278697472 00:07:11.040 14:47:56 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # avails["$mount"]=22540369920 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # sizes["$mount"]=22541647872 00:07:11.040 14:47:56 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:07:11.040 14:47:56 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # avails["$mount"]=9007874048 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9016659968 00:07:11.040 14:47:56 -- common/autotest_common.sh@352 -- # uses["$mount"]=8785920 00:07:11.040 14:47:56 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # avails["$mount"]=22540922880 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # sizes["$mount"]=22541647872 00:07:11.040 14:47:56 -- common/autotest_common.sh@352 -- # uses["$mount"]=724992 00:07:11.040 14:47:56 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # avails["$mount"]=4508323840 00:07:11.040 14:47:56 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4508327936 00:07:11.040 14:47:56 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:11.040 14:47:56 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:11.040 14:47:56 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:11.040 * Looking for test storage... 
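The set_test_storage trace above reduces to a small pattern: parse df -T line by line into associative arrays keyed by mount point, then walk the candidate directories until one offers the requested free space. A condensed sketch using the same variable names as the trace (the candidate-selection step, traced just below, is summarized in the final comment):

  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source     # device backing the mount
      fss["$mount"]=$fs            # filesystem type (tmpfs, overlay, ...)
      sizes["$mount"]=$size
      avails["$mount"]=$avail
      uses["$mount"]=$use
  done < <(df -T | grep -v Filesystem)

  # Selection then amounts to: for each candidate dir, resolve its mount
  # point and accept it when (( avails[mount] >= requested_size )).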
00:07:11.040 14:47:56 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:11.040 14:47:56 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:11.040 14:47:56 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.040 14:47:56 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:11.040 14:47:56 -- common/autotest_common.sh@361 -- # mount=/ 00:07:11.040 14:47:56 -- common/autotest_common.sh@363 -- # target_space=35804598272 00:07:11.040 14:47:56 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:11.040 14:47:56 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:11.040 14:47:56 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:11.040 14:47:56 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:11.040 14:47:56 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:11.040 14:47:56 -- common/autotest_common.sh@370 -- # new_size=11493289984 00:07:11.040 14:47:56 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:11.040 14:47:56 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.040 14:47:56 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.040 14:47:56 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.040 14:47:56 -- common/autotest_common.sh@378 -- # return 0 00:07:11.040 14:47:56 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:11.040 14:47:56 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:11.040 14:47:56 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:11.040 14:47:56 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:11.040 14:47:56 -- common/autotest_common.sh@1673 -- # true 00:07:11.040 14:47:56 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:11.040 14:47:56 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:11.040 14:47:56 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:11.040 14:47:56 -- common/autotest_common.sh@27 -- # exec 00:07:11.040 14:47:56 -- common/autotest_common.sh@29 -- # exec 00:07:11.040 14:47:56 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:11.040 14:47:56 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:11.040 14:47:56 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:11.040 14:47:56 -- common/autotest_common.sh@18 -- # set -x 00:07:11.040 14:47:56 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.040 14:47:56 -- nvmf/common.sh@7 -- # uname -s 00:07:11.040 14:47:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.040 14:47:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.040 14:47:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.040 14:47:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.040 14:47:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.040 14:47:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.040 14:47:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.040 14:47:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.040 14:47:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.040 14:47:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.040 14:47:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:11.040 14:47:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:11.040 14:47:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.040 14:47:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.040 14:47:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.040 14:47:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.040 14:47:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.040 14:47:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.040 14:47:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.040 14:47:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.040 14:47:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.040 14:47:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.040 14:47:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.040 14:47:56 -- paths/export.sh@5 -- # export PATH 00:07:11.040 14:47:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.040 14:47:56 -- nvmf/common.sh@47 -- # : 0 00:07:11.040 14:47:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.040 14:47:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.040 14:47:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.040 14:47:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.040 14:47:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.040 14:47:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.040 14:47:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.040 14:47:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.040 14:47:56 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:11.040 14:47:56 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:11.040 14:47:56 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:11.040 14:47:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:11.040 14:47:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.040 14:47:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:11.040 14:47:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:11.040 14:47:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:11.040 14:47:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.041 14:47:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.041 14:47:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.041 14:47:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:11.041 14:47:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:11.041 14:47:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.041 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:12.938 14:47:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:12.938 14:47:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.938 14:47:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.938 14:47:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.938 14:47:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.938 14:47:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.938 14:47:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.938 14:47:58 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:12.938 14:47:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.938 14:47:58 -- nvmf/common.sh@296 -- # e810=() 00:07:12.938 14:47:58 -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.938 14:47:58 -- nvmf/common.sh@297 -- # x722=() 00:07:12.938 14:47:58 -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.938 14:47:58 -- nvmf/common.sh@298 -- # mlx=() 00:07:12.938 14:47:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.938 14:47:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.938 14:47:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.938 14:47:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:12.938 14:47:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:12.938 14:47:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:12.938 14:47:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:12.938 14:47:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.938 14:47:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.196 14:47:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:13.196 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:13.196 14:47:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.196 14:47:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:13.196 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:13.196 14:47:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.196 14:47:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.196 14:47:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.196 14:47:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:13.196 14:47:58 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.196 14:47:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:13.196 Found net devices under 0000:84:00.0: cvl_0_0 00:07:13.196 14:47:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.196 14:47:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.196 14:47:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.196 14:47:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:13.196 14:47:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.196 14:47:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:13.196 Found net devices under 0000:84:00.1: cvl_0_1 00:07:13.196 14:47:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.196 14:47:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:13.196 14:47:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:13.196 14:47:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:13.196 14:47:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.196 14:47:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.196 14:47:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.196 14:47:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.196 14:47:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.196 14:47:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.196 14:47:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.196 14:47:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.196 14:47:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.196 14:47:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.196 14:47:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.196 14:47:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.196 14:47:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.196 14:47:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.196 14:47:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.196 14:47:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.196 14:47:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.196 14:47:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.196 14:47:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.196 14:47:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:13.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:07:13.196 00:07:13.196 --- 10.0.0.2 ping statistics --- 00:07:13.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.196 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:13.196 14:47:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:13.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:07:13.196 00:07:13.196 --- 10.0.0.1 ping statistics --- 00:07:13.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.196 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:07:13.196 14:47:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.196 14:47:58 -- nvmf/common.sh@411 -- # return 0 00:07:13.196 14:47:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:13.196 14:47:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.196 14:47:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:13.196 14:47:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.196 14:47:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:13.196 14:47:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:13.196 14:47:58 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:13.196 14:47:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:13.196 14:47:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.196 14:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:13.196 ************************************ 00:07:13.196 START TEST nvmf_filesystem_no_in_capsule 00:07:13.196 ************************************ 00:07:13.196 14:47:58 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:13.196 14:47:58 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:13.196 14:47:58 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:13.196 14:47:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:13.196 14:47:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:13.196 14:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:13.196 14:47:58 -- nvmf/common.sh@470 -- # nvmfpid=3669368 00:07:13.196 14:47:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.196 14:47:58 -- nvmf/common.sh@471 -- # waitforlisten 3669368 00:07:13.196 14:47:58 -- common/autotest_common.sh@817 -- # '[' -z 3669368 ']' 00:07:13.196 14:47:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.196 14:47:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.196 14:47:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.196 14:47:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.196 14:47:58 -- common/autotest_common.sh@10 -- # set +x 00:07:13.454 [2024-04-26 14:47:58.971331] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:07:13.454 [2024-04-26 14:47:58.971411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.454 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.454 [2024-04-26 14:47:59.014918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
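Before the target started, nvmf_tcp_init (traced further up) built the test bed: the two ice ports are split across network namespaces so one machine can act as both target and initiator without the traffic short-circuiting through loopback. Condensed, with device names and addresses exactly as in this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

This is why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk just below: its listener at 10.0.0.2:4420 lives in the target namespace, while the nvme host commands run from the root namespace.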
00:07:13.454 [2024-04-26 14:47:59.045654] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.454 [2024-04-26 14:47:59.137830] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.454 [2024-04-26 14:47:59.137880] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.454 [2024-04-26 14:47:59.137897] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.454 [2024-04-26 14:47:59.137911] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.454 [2024-04-26 14:47:59.137923] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.454 [2024-04-26 14:47:59.137980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.454 [2024-04-26 14:47:59.138046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.454 [2024-04-26 14:47:59.138088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.454 [2024-04-26 14:47:59.138091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.712 14:47:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:13.712 14:47:59 -- common/autotest_common.sh@850 -- # return 0 00:07:13.712 14:47:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:13.712 14:47:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:13.712 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:13.712 14:47:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.712 14:47:59 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:13.712 14:47:59 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:13.712 14:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.712 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:13.712 [2024-04-26 14:47:59.293738] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.712 14:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.712 14:47:59 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:13.712 14:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.712 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:13.969 Malloc1 00:07:13.969 14:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.969 14:47:59 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:13.969 14:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.969 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:13.969 14:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.969 14:47:59 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.969 14:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.969 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:13.970 14:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.970 14:47:59 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.970 14:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.970 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:13.970 [2024-04-26 14:47:59.478982] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.970 14:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.970 14:47:59 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:13.970 14:47:59 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:13.970 14:47:59 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:13.970 14:47:59 -- common/autotest_common.sh@1366 -- # local bs 00:07:13.970 14:47:59 -- common/autotest_common.sh@1367 -- # local nb 00:07:13.970 14:47:59 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:13.970 14:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.970 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:13.970 14:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.970 14:47:59 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:13.970 { 00:07:13.970 "name": "Malloc1", 00:07:13.970 "aliases": [ 00:07:13.970 "1323afcd-0354-408d-8774-c164ca4add99" 00:07:13.970 ], 00:07:13.970 "product_name": "Malloc disk", 00:07:13.970 "block_size": 512, 00:07:13.970 "num_blocks": 1048576, 00:07:13.970 "uuid": "1323afcd-0354-408d-8774-c164ca4add99", 00:07:13.970 "assigned_rate_limits": { 00:07:13.970 "rw_ios_per_sec": 0, 00:07:13.970 "rw_mbytes_per_sec": 0, 00:07:13.970 "r_mbytes_per_sec": 0, 00:07:13.970 "w_mbytes_per_sec": 0 00:07:13.970 }, 00:07:13.970 "claimed": true, 00:07:13.970 "claim_type": "exclusive_write", 00:07:13.970 "zoned": false, 00:07:13.970 "supported_io_types": { 00:07:13.970 "read": true, 00:07:13.970 "write": true, 00:07:13.970 "unmap": true, 00:07:13.970 "write_zeroes": true, 00:07:13.970 "flush": true, 00:07:13.970 "reset": true, 00:07:13.970 "compare": false, 00:07:13.970 "compare_and_write": false, 00:07:13.970 "abort": true, 00:07:13.970 "nvme_admin": false, 00:07:13.970 "nvme_io": false 00:07:13.970 }, 00:07:13.970 "memory_domains": [ 00:07:13.970 { 00:07:13.970 "dma_device_id": "system", 00:07:13.970 "dma_device_type": 1 00:07:13.970 }, 00:07:13.970 { 00:07:13.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.970 "dma_device_type": 2 00:07:13.970 } 00:07:13.970 ], 00:07:13.970 "driver_specific": {} 00:07:13.970 } 00:07:13.970 ]' 00:07:13.970 14:47:59 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:13.970 14:47:59 -- common/autotest_common.sh@1369 -- # bs=512 00:07:13.970 14:47:59 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:13.970 14:47:59 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:13.970 14:47:59 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:13.970 14:47:59 -- common/autotest_common.sh@1374 -- # echo 512 00:07:13.970 14:47:59 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:13.970 14:47:59 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.535 14:48:00 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:14.535 14:48:00 -- common/autotest_common.sh@1184 -- # local i=0 00:07:14.535 14:48:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:14.535 14:48:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:14.535 14:48:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:17.061 14:48:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:17.061 14:48:02 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:17.061 14:48:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:17.061 14:48:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:17.061 14:48:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:17.061 14:48:02 -- common/autotest_common.sh@1194 -- # return 0 00:07:17.061 14:48:02 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:17.061 14:48:02 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:17.061 14:48:02 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:17.061 14:48:02 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:17.061 14:48:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:17.061 14:48:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:17.061 14:48:02 -- setup/common.sh@80 -- # echo 536870912 00:07:17.061 14:48:02 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:17.061 14:48:02 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:17.061 14:48:02 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:17.061 14:48:02 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:17.061 14:48:02 -- target/filesystem.sh@69 -- # partprobe 00:07:17.061 14:48:02 -- target/filesystem.sh@70 -- # sleep 1 00:07:17.994 14:48:03 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:17.994 14:48:03 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:17.994 14:48:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:17.994 14:48:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.994 14:48:03 -- common/autotest_common.sh@10 -- # set +x 00:07:18.253 ************************************ 00:07:18.253 START TEST filesystem_ext4 00:07:18.253 ************************************ 00:07:18.253 14:48:03 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:18.253 14:48:03 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:18.253 14:48:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:18.253 14:48:03 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:18.253 14:48:03 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:18.253 14:48:03 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:18.253 14:48:03 -- common/autotest_common.sh@914 -- # local i=0 00:07:18.253 14:48:03 -- common/autotest_common.sh@915 -- # local force 00:07:18.253 14:48:03 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:18.253 14:48:03 -- common/autotest_common.sh@918 -- # force=-F 00:07:18.253 14:48:03 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:18.253 mke2fs 1.46.5 (30-Dec-2021) 00:07:18.253 Discarding device blocks: 0/522240 done 00:07:18.253 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:18.253 Filesystem UUID: 9ba9c948-77e2-4fdf-915b-5e9344e57a0e 00:07:18.253 Superblock backups stored on blocks: 00:07:18.253 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:18.253 00:07:18.253 Allocating group tables: 0/64 done 00:07:18.253 Writing inode tables: 0/64 done 00:07:18.818 Creating journal (8192 blocks): done 00:07:19.640 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:07:19.640 00:07:19.640 14:48:05 -- common/autotest_common.sh@931 -- # return 0 00:07:19.640 14:48:05 -- target/filesystem.sh@23 
-- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.572 14:48:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.572 14:48:06 -- target/filesystem.sh@25 -- # sync 00:07:20.572 14:48:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.572 14:48:06 -- target/filesystem.sh@27 -- # sync 00:07:20.572 14:48:06 -- target/filesystem.sh@29 -- # i=0 00:07:20.572 14:48:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.572 14:48:06 -- target/filesystem.sh@37 -- # kill -0 3669368 00:07:20.572 14:48:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.572 14:48:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.572 14:48:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.572 14:48:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.572 00:07:20.572 real 0m2.352s 00:07:20.572 user 0m0.017s 00:07:20.572 sys 0m0.031s 00:07:20.572 14:48:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.572 14:48:06 -- common/autotest_common.sh@10 -- # set +x 00:07:20.572 ************************************ 00:07:20.572 END TEST filesystem_ext4 00:07:20.572 ************************************ 00:07:20.572 14:48:06 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:20.572 14:48:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:20.572 14:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.572 14:48:06 -- common/autotest_common.sh@10 -- # set +x 00:07:20.572 ************************************ 00:07:20.572 START TEST filesystem_btrfs 00:07:20.572 ************************************ 00:07:20.572 14:48:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:20.572 14:48:06 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:20.572 14:48:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.572 14:48:06 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:20.572 14:48:06 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:20.572 14:48:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:20.572 14:48:06 -- common/autotest_common.sh@914 -- # local i=0 00:07:20.572 14:48:06 -- common/autotest_common.sh@915 -- # local force 00:07:20.572 14:48:06 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:20.572 14:48:06 -- common/autotest_common.sh@920 -- # force=-f 00:07:20.572 14:48:06 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:20.831 btrfs-progs v6.6.2 00:07:20.831 See https://btrfs.readthedocs.io for more information. 00:07:20.831 00:07:20.831 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:20.831 NOTE: several default settings have changed in version 5.15, please make sure 00:07:20.831 this does not affect your deployments: 00:07:20.831 - DUP for metadata (-m dup) 00:07:20.831 - enabled no-holes (-O no-holes) 00:07:20.831 - enabled free-space-tree (-R free-space-tree) 00:07:20.831 00:07:20.831 Label: (null) 00:07:20.831 UUID: 67eb74ba-7623-408a-bd7e-fe38a34ece60 00:07:20.831 Node size: 16384 00:07:20.831 Sector size: 4096 00:07:20.831 Filesystem size: 510.00MiB 00:07:20.831 Block group profiles: 00:07:20.831 Data: single 8.00MiB 00:07:20.831 Metadata: DUP 32.00MiB 00:07:20.831 System: DUP 8.00MiB 00:07:20.831 SSD detected: yes 00:07:20.831 Zoned device: no 00:07:20.831 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:20.831 Runtime features: free-space-tree 00:07:20.831 Checksum: crc32c 00:07:20.831 Number of devices: 1 00:07:20.831 Devices: 00:07:20.831 ID SIZE PATH 00:07:20.831 1 510.00MiB /dev/nvme0n1p1 00:07:20.831 00:07:20.831 14:48:06 -- common/autotest_common.sh@931 -- # return 0 00:07:20.831 14:48:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:21.797 14:48:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:21.797 14:48:07 -- target/filesystem.sh@25 -- # sync 00:07:21.797 14:48:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:21.797 14:48:07 -- target/filesystem.sh@27 -- # sync 00:07:21.797 14:48:07 -- target/filesystem.sh@29 -- # i=0 00:07:21.797 14:48:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:21.797 14:48:07 -- target/filesystem.sh@37 -- # kill -0 3669368 00:07:21.797 14:48:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:21.797 14:48:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:21.797 14:48:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:21.797 14:48:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:21.797 00:07:21.797 real 0m1.078s 00:07:21.797 user 0m0.014s 00:07:21.797 sys 0m0.040s 00:07:21.797 14:48:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:21.797 14:48:07 -- common/autotest_common.sh@10 -- # set +x 00:07:21.797 ************************************ 00:07:21.797 END TEST filesystem_btrfs 00:07:21.797 ************************************ 00:07:21.797 14:48:07 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:21.797 14:48:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:21.797 14:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.797 14:48:07 -- common/autotest_common.sh@10 -- # set +x 00:07:21.797 ************************************ 00:07:21.797 START TEST filesystem_xfs 00:07:21.797 ************************************ 00:07:21.797 14:48:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:21.797 14:48:07 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:21.797 14:48:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:21.797 14:48:07 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:21.797 14:48:07 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:21.797 14:48:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:21.797 14:48:07 -- common/autotest_common.sh@914 -- # local i=0 00:07:21.797 14:48:07 -- common/autotest_common.sh@915 -- # local force 00:07:21.797 14:48:07 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:21.797 14:48:07 -- common/autotest_common.sh@920 -- # force=-f 00:07:21.797 14:48:07 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:22.055 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:22.055 = sectsz=512 attr=2, projid32bit=1 00:07:22.055 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:22.055 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:22.055 data = bsize=4096 blocks=130560, imaxpct=25 00:07:22.055 = sunit=0 swidth=0 blks 00:07:22.055 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:22.055 log =internal log bsize=4096 blocks=16384, version=2 00:07:22.055 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:22.055 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:22.620 Discarding blocks...Done. 00:07:22.620 14:48:08 -- common/autotest_common.sh@931 -- # return 0 00:07:22.620 14:48:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:24.517 14:48:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:24.517 14:48:10 -- target/filesystem.sh@25 -- # sync 00:07:24.517 14:48:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:24.517 14:48:10 -- target/filesystem.sh@27 -- # sync 00:07:24.517 14:48:10 -- target/filesystem.sh@29 -- # i=0 00:07:24.517 14:48:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:24.517 14:48:10 -- target/filesystem.sh@37 -- # kill -0 3669368 00:07:24.517 14:48:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:24.517 14:48:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:24.517 14:48:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:24.517 14:48:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:24.517 00:07:24.517 real 0m2.804s 00:07:24.517 user 0m0.013s 00:07:24.517 sys 0m0.039s 00:07:24.517 14:48:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.517 14:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:24.517 ************************************ 00:07:24.517 END TEST filesystem_xfs 00:07:24.517 ************************************ 00:07:24.776 14:48:10 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:24.776 14:48:10 -- target/filesystem.sh@93 -- # sync 00:07:24.776 14:48:10 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.776 14:48:10 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.776 14:48:10 -- common/autotest_common.sh@1205 -- # local i=0 00:07:24.776 14:48:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:24.776 14:48:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.776 14:48:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:24.776 14:48:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.776 14:48:10 -- common/autotest_common.sh@1217 -- # return 0 00:07:24.776 14:48:10 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.776 14:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.776 14:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:24.776 14:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.776 14:48:10 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:24.776 14:48:10 -- target/filesystem.sh@101 -- # killprocess 3669368 00:07:24.776 14:48:10 -- common/autotest_common.sh@936 -- # '[' -z 3669368 ']' 00:07:24.776 14:48:10 -- common/autotest_common.sh@940 -- # kill -0 3669368 00:07:24.776 14:48:10 -- 
common/autotest_common.sh@941 -- # uname 00:07:24.776 14:48:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:24.776 14:48:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3669368 00:07:24.776 14:48:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:24.776 14:48:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:24.776 14:48:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3669368' 00:07:24.776 killing process with pid 3669368 00:07:24.776 14:48:10 -- common/autotest_common.sh@955 -- # kill 3669368 00:07:24.776 14:48:10 -- common/autotest_common.sh@960 -- # wait 3669368 00:07:25.343 14:48:10 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:25.343 00:07:25.343 real 0m11.883s 00:07:25.343 user 0m45.562s 00:07:25.343 sys 0m1.824s 00:07:25.343 14:48:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.343 14:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:25.343 ************************************ 00:07:25.343 END TEST nvmf_filesystem_no_in_capsule 00:07:25.343 ************************************ 00:07:25.343 14:48:10 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:25.343 14:48:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:25.343 14:48:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.343 14:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:25.343 ************************************ 00:07:25.343 START TEST nvmf_filesystem_in_capsule 00:07:25.343 ************************************ 00:07:25.343 14:48:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:25.343 14:48:10 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:25.343 14:48:10 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:25.343 14:48:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:25.343 14:48:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:25.343 14:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:25.343 14:48:10 -- nvmf/common.sh@470 -- # nvmfpid=3671641 00:07:25.343 14:48:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:25.343 14:48:10 -- nvmf/common.sh@471 -- # waitforlisten 3671641 00:07:25.343 14:48:10 -- common/autotest_common.sh@817 -- # '[' -z 3671641 ']' 00:07:25.343 14:48:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.343 14:48:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:25.343 14:48:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.343 14:48:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:25.343 14:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:25.343 [2024-04-26 14:48:10.984108] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
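This second pass repeats the same filesystem matrix; the one functional change, visible in the transport creation below, is the in-capsule data size:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: in-capsule data off
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: up to 4 KiB in-capsule

Roughly speaking, with -c 4096 the NVMe/TCP target accepts up to 4 KiB of write data carried inside the command capsule itself, so small writes skip the ready-to-transfer round trip; with -c 0 all data is solicited separately. Running the identical workload both ways checks that the filesystems behave the same under either data path.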
00:07:25.343 [2024-04-26 14:48:10.984184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.343 [2024-04-26 14:48:11.022199] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:25.343 [2024-04-26 14:48:11.054594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.602 [2024-04-26 14:48:11.144855] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.603 [2024-04-26 14:48:11.144925] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.603 [2024-04-26 14:48:11.144943] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.603 [2024-04-26 14:48:11.144956] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.603 [2024-04-26 14:48:11.144968] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.603 [2024-04-26 14:48:11.145054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.603 [2024-04-26 14:48:11.145103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.603 [2024-04-26 14:48:11.145196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.603 [2024-04-26 14:48:11.145199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.603 14:48:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:25.603 14:48:11 -- common/autotest_common.sh@850 -- # return 0 00:07:25.603 14:48:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:25.603 14:48:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:25.603 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.603 14:48:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.603 14:48:11 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:25.603 14:48:11 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:25.603 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.603 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.603 [2024-04-26 14:48:11.297754] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.603 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.603 14:48:11 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:25.603 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.603 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.861 Malloc1 00:07:25.861 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.861 14:48:11 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:25.861 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.861 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.861 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.861 14:48:11 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:25.861 14:48:11 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.861 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.861 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.861 14:48:11 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.861 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.861 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.861 [2024-04-26 14:48:11.486260] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.861 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.861 14:48:11 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:25.861 14:48:11 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:25.861 14:48:11 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:25.861 14:48:11 -- common/autotest_common.sh@1366 -- # local bs 00:07:25.861 14:48:11 -- common/autotest_common.sh@1367 -- # local nb 00:07:25.861 14:48:11 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:25.861 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.861 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.861 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.861 14:48:11 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:25.861 { 00:07:25.861 "name": "Malloc1", 00:07:25.861 "aliases": [ 00:07:25.861 "3538ff9a-aaa4-466b-bafe-ffed6ea20977" 00:07:25.861 ], 00:07:25.861 "product_name": "Malloc disk", 00:07:25.861 "block_size": 512, 00:07:25.861 "num_blocks": 1048576, 00:07:25.861 "uuid": "3538ff9a-aaa4-466b-bafe-ffed6ea20977", 00:07:25.861 "assigned_rate_limits": { 00:07:25.861 "rw_ios_per_sec": 0, 00:07:25.861 "rw_mbytes_per_sec": 0, 00:07:25.861 "r_mbytes_per_sec": 0, 00:07:25.861 "w_mbytes_per_sec": 0 00:07:25.861 }, 00:07:25.861 "claimed": true, 00:07:25.861 "claim_type": "exclusive_write", 00:07:25.861 "zoned": false, 00:07:25.861 "supported_io_types": { 00:07:25.861 "read": true, 00:07:25.861 "write": true, 00:07:25.861 "unmap": true, 00:07:25.861 "write_zeroes": true, 00:07:25.861 "flush": true, 00:07:25.861 "reset": true, 00:07:25.861 "compare": false, 00:07:25.861 "compare_and_write": false, 00:07:25.861 "abort": true, 00:07:25.861 "nvme_admin": false, 00:07:25.861 "nvme_io": false 00:07:25.861 }, 00:07:25.861 "memory_domains": [ 00:07:25.861 { 00:07:25.861 "dma_device_id": "system", 00:07:25.861 "dma_device_type": 1 00:07:25.861 }, 00:07:25.861 { 00:07:25.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.861 "dma_device_type": 2 00:07:25.861 } 00:07:25.861 ], 00:07:25.861 "driver_specific": {} 00:07:25.861 } 00:07:25.861 ]' 00:07:25.861 14:48:11 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:25.861 14:48:11 -- common/autotest_common.sh@1369 -- # bs=512 00:07:25.861 14:48:11 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:25.861 14:48:11 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:25.861 14:48:11 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:25.861 14:48:11 -- common/autotest_common.sh@1374 -- # echo 512 00:07:25.861 14:48:11 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:25.861 14:48:11 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:07:26.792 14:48:12 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.792 14:48:12 -- common/autotest_common.sh@1184 -- # local i=0 00:07:26.792 14:48:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.792 14:48:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:26.792 14:48:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:28.688 14:48:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:28.688 14:48:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:28.688 14:48:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:28.688 14:48:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:28.688 14:48:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:28.688 14:48:14 -- common/autotest_common.sh@1194 -- # return 0 00:07:28.688 14:48:14 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:28.688 14:48:14 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:28.688 14:48:14 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:28.688 14:48:14 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:28.688 14:48:14 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:28.688 14:48:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:28.688 14:48:14 -- setup/common.sh@80 -- # echo 536870912 00:07:28.688 14:48:14 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:28.688 14:48:14 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:28.688 14:48:14 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:28.688 14:48:14 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:28.946 14:48:14 -- target/filesystem.sh@69 -- # partprobe 00:07:29.510 14:48:15 -- target/filesystem.sh@70 -- # sleep 1 00:07:30.442 14:48:16 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:30.442 14:48:16 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:30.442 14:48:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:30.442 14:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.442 14:48:16 -- common/autotest_common.sh@10 -- # set +x 00:07:30.700 ************************************ 00:07:30.700 START TEST filesystem_in_capsule_ext4 00:07:30.700 ************************************ 00:07:30.700 14:48:16 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:30.700 14:48:16 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:30.700 14:48:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.700 14:48:16 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:30.700 14:48:16 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:30.700 14:48:16 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:30.700 14:48:16 -- common/autotest_common.sh@914 -- # local i=0 00:07:30.700 14:48:16 -- common/autotest_common.sh@915 -- # local force 00:07:30.700 14:48:16 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:30.700 14:48:16 -- common/autotest_common.sh@918 -- # force=-F 00:07:30.700 14:48:16 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:30.700 mke2fs 1.46.5 (30-Dec-2021) 00:07:30.700 Discarding device blocks: 0/522240 done 00:07:30.700 Creating filesystem with 522240 1k blocks and 130560 inodes 
00:07:30.700 Filesystem UUID: 592efc83-c87f-4ae2-975f-37b2cf5cf92c 00:07:30.700 Superblock backups stored on blocks: 00:07:30.700 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:30.700 00:07:30.700 Allocating group tables: 0/64 done 00:07:30.700 Writing inode tables: 0/64 done 00:07:30.958 Creating journal (8192 blocks): done 00:07:31.779 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:31.779 00:07:31.779 14:48:17 -- common/autotest_common.sh@931 -- # return 0 00:07:31.779 14:48:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.344 14:48:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.344 14:48:17 -- target/filesystem.sh@25 -- # sync 00:07:32.344 14:48:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.344 14:48:17 -- target/filesystem.sh@27 -- # sync 00:07:32.344 14:48:17 -- target/filesystem.sh@29 -- # i=0 00:07:32.344 14:48:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.344 14:48:17 -- target/filesystem.sh@37 -- # kill -0 3671641 00:07:32.344 14:48:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.344 14:48:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.344 14:48:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.344 14:48:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.344 00:07:32.344 real 0m1.781s 00:07:32.344 user 0m0.020s 00:07:32.344 sys 0m0.032s 00:07:32.344 14:48:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:32.344 14:48:17 -- common/autotest_common.sh@10 -- # set +x 00:07:32.344 ************************************ 00:07:32.344 END TEST filesystem_in_capsule_ext4 00:07:32.344 ************************************ 00:07:32.344 14:48:17 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:32.344 14:48:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:32.344 14:48:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.344 14:48:17 -- common/autotest_common.sh@10 -- # set +x 00:07:32.602 ************************************ 00:07:32.602 START TEST filesystem_in_capsule_btrfs 00:07:32.602 ************************************ 00:07:32.602 14:48:18 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:32.602 14:48:18 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:32.602 14:48:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:32.602 14:48:18 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:32.602 14:48:18 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:32.602 14:48:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:32.602 14:48:18 -- common/autotest_common.sh@914 -- # local i=0 00:07:32.602 14:48:18 -- common/autotest_common.sh@915 -- # local force 00:07:32.602 14:48:18 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:32.602 14:48:18 -- common/autotest_common.sh@920 -- # force=-f 00:07:32.602 14:48:18 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:32.602 btrfs-progs v6.6.2 00:07:32.602 See https://btrfs.readthedocs.io for more information. 00:07:32.602 00:07:32.602 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:32.602 NOTE: several default settings have changed in version 5.15, please make sure 00:07:32.602 this does not affect your deployments: 00:07:32.602 - DUP for metadata (-m dup) 00:07:32.602 - enabled no-holes (-O no-holes) 00:07:32.602 - enabled free-space-tree (-R free-space-tree) 00:07:32.602 00:07:32.602 Label: (null) 00:07:32.602 UUID: da7d579b-49d3-456b-80a8-8f4117a5d90e 00:07:32.602 Node size: 16384 00:07:32.602 Sector size: 4096 00:07:32.602 Filesystem size: 510.00MiB 00:07:32.602 Block group profiles: 00:07:32.602 Data: single 8.00MiB 00:07:32.602 Metadata: DUP 32.00MiB 00:07:32.602 System: DUP 8.00MiB 00:07:32.602 SSD detected: yes 00:07:32.602 Zoned device: no 00:07:32.602 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:32.602 Runtime features: free-space-tree 00:07:32.602 Checksum: crc32c 00:07:32.602 Number of devices: 1 00:07:32.602 Devices: 00:07:32.602 ID SIZE PATH 00:07:32.602 1 510.00MiB /dev/nvme0n1p1 00:07:32.602 00:07:32.602 14:48:18 -- common/autotest_common.sh@931 -- # return 0 00:07:32.602 14:48:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.166 14:48:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.166 14:48:18 -- target/filesystem.sh@25 -- # sync 00:07:33.166 14:48:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.166 14:48:18 -- target/filesystem.sh@27 -- # sync 00:07:33.166 14:48:18 -- target/filesystem.sh@29 -- # i=0 00:07:33.166 14:48:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.166 14:48:18 -- target/filesystem.sh@37 -- # kill -0 3671641 00:07:33.166 14:48:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.166 14:48:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.166 14:48:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.166 14:48:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.166 00:07:33.166 real 0m0.631s 00:07:33.166 user 0m0.012s 00:07:33.166 sys 0m0.040s 00:07:33.166 14:48:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.166 14:48:18 -- common/autotest_common.sh@10 -- # set +x 00:07:33.166 ************************************ 00:07:33.166 END TEST filesystem_in_capsule_btrfs 00:07:33.166 ************************************ 00:07:33.166 14:48:18 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:33.166 14:48:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:33.166 14:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.166 14:48:18 -- common/autotest_common.sh@10 -- # set +x 00:07:33.166 ************************************ 00:07:33.166 START TEST filesystem_in_capsule_xfs 00:07:33.166 ************************************ 00:07:33.166 14:48:18 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:33.166 14:48:18 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:33.166 14:48:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.166 14:48:18 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:33.166 14:48:18 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:33.166 14:48:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:33.166 14:48:18 -- common/autotest_common.sh@914 -- # local i=0 00:07:33.166 14:48:18 -- common/autotest_common.sh@915 -- # local force 00:07:33.166 14:48:18 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:33.166 14:48:18 -- common/autotest_common.sh@920 -- # force=-f 
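The force flag being picked here is the only filesystem-specific branch in the helper: mke2fs wants -F while mkfs.btrfs and mkfs.xfs want -f. A minimal reconstruction of the make_filesystem logic the trace keeps exercising (the retry counter it also sets up is omitted):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mke2fs spells force with a capital F
        else
            force=-f        # btrfs and xfs use lowercase
        fi
        "mkfs.$fstype" "$force" "$dev_name"
    }

Called as make_filesystem xfs /dev/nvme0n1p1, this yields the mkfs.xfs -f invocation traced next.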
00:07:33.166 14:48:18 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:33.424 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:33.424 = sectsz=512 attr=2, projid32bit=1 00:07:33.424 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:33.424 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:33.424 data = bsize=4096 blocks=130560, imaxpct=25 00:07:33.424 = sunit=0 swidth=0 blks 00:07:33.424 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:33.424 log =internal log bsize=4096 blocks=16384, version=2 00:07:33.424 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:33.424 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:34.354 Discarding blocks...Done. 00:07:34.354 14:48:19 -- common/autotest_common.sh@931 -- # return 0 00:07:34.354 14:48:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:36.878 14:48:22 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:36.878 14:48:22 -- target/filesystem.sh@25 -- # sync 00:07:36.878 14:48:22 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:36.878 14:48:22 -- target/filesystem.sh@27 -- # sync 00:07:36.878 14:48:22 -- target/filesystem.sh@29 -- # i=0 00:07:36.878 14:48:22 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:36.878 14:48:22 -- target/filesystem.sh@37 -- # kill -0 3671641 00:07:36.878 14:48:22 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:36.878 14:48:22 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:36.878 14:48:22 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:36.878 14:48:22 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:36.878 00:07:36.878 real 0m3.360s 00:07:36.878 user 0m0.012s 00:07:36.878 sys 0m0.043s 00:07:36.878 14:48:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.878 14:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:36.878 ************************************ 00:07:36.878 END TEST filesystem_in_capsule_xfs 00:07:36.878 ************************************ 00:07:36.878 14:48:22 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:36.878 14:48:22 -- target/filesystem.sh@93 -- # sync 00:07:36.878 14:48:22 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:36.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.878 14:48:22 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:36.878 14:48:22 -- common/autotest_common.sh@1205 -- # local i=0 00:07:36.878 14:48:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:36.878 14:48:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.878 14:48:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:36.878 14:48:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.878 14:48:22 -- common/autotest_common.sh@1217 -- # return 0 00:07:36.878 14:48:22 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.878 14:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:36.878 14:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:36.878 14:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:36.878 14:48:22 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:36.878 14:48:22 -- target/filesystem.sh@101 -- # killprocess 3671641 00:07:36.878 14:48:22 -- common/autotest_common.sh@936 -- # '[' -z 3671641 ']' 00:07:36.878 14:48:22 -- common/autotest_common.sh@940 -- # kill -0 3671641 
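Each filesystem pass above ends with the same smoke test, and the whole in-capsule test with the same teardown; condensed from the trace (the target pid, 3671641 in this run, is written as $nvmfpid):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"        # the target must still be alive after the I/O
    # once ext4, btrfs and xfs have all passed:
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"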
00:07:36.878 14:48:22 -- common/autotest_common.sh@941 -- # uname 00:07:36.878 14:48:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:36.878 14:48:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3671641 00:07:36.878 14:48:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:36.878 14:48:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:36.878 14:48:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3671641' 00:07:36.878 killing process with pid 3671641 00:07:36.878 14:48:22 -- common/autotest_common.sh@955 -- # kill 3671641 00:07:36.878 14:48:22 -- common/autotest_common.sh@960 -- # wait 3671641 00:07:37.136 14:48:22 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:37.136 00:07:37.136 real 0m11.842s 00:07:37.136 user 0m45.386s 00:07:37.136 sys 0m1.858s 00:07:37.136 14:48:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.136 14:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:37.136 ************************************ 00:07:37.136 END TEST nvmf_filesystem_in_capsule 00:07:37.136 ************************************ 00:07:37.136 14:48:22 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:37.136 14:48:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:37.136 14:48:22 -- nvmf/common.sh@117 -- # sync 00:07:37.136 14:48:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:37.136 14:48:22 -- nvmf/common.sh@120 -- # set +e 00:07:37.136 14:48:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:37.136 14:48:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:37.136 rmmod nvme_tcp 00:07:37.136 rmmod nvme_fabrics 00:07:37.136 rmmod nvme_keyring 00:07:37.136 14:48:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.136 14:48:22 -- nvmf/common.sh@124 -- # set -e 00:07:37.136 14:48:22 -- nvmf/common.sh@125 -- # return 0 00:07:37.136 14:48:22 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:37.136 14:48:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:37.136 14:48:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:37.136 14:48:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:37.136 14:48:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.136 14:48:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.136 14:48:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.136 14:48:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.136 14:48:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.706 14:48:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.706 00:07:39.706 real 0m28.332s 00:07:39.706 user 1m31.886s 00:07:39.706 sys 0m5.337s 00:07:39.706 14:48:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:39.706 14:48:24 -- common/autotest_common.sh@10 -- # set +x 00:07:39.706 ************************************ 00:07:39.706 END TEST nvmf_filesystem 00:07:39.706 ************************************ 00:07:39.706 14:48:24 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:39.706 14:48:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:39.706 14:48:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.706 14:48:24 -- common/autotest_common.sh@10 -- # set +x 00:07:39.706 ************************************ 00:07:39.706 START TEST nvmf_discovery 00:07:39.706 ************************************ 00:07:39.706 
14:48:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:39.706 * Looking for test storage... 00:07:39.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.706 14:48:25 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.706 14:48:25 -- nvmf/common.sh@7 -- # uname -s 00:07:39.706 14:48:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.706 14:48:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.706 14:48:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.706 14:48:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.706 14:48:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.706 14:48:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.706 14:48:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.706 14:48:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.706 14:48:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.706 14:48:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.706 14:48:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:39.706 14:48:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:39.706 14:48:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.706 14:48:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.706 14:48:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.706 14:48:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.706 14:48:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.706 14:48:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.706 14:48:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.706 14:48:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.706 14:48:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.706 14:48:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.706 14:48:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.706 14:48:25 -- paths/export.sh@5 -- # export PATH 00:07:39.706 14:48:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.706 14:48:25 -- nvmf/common.sh@47 -- # : 0 00:07:39.706 14:48:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.706 14:48:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.706 14:48:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.706 14:48:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.706 14:48:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.706 14:48:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.706 14:48:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.706 14:48:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.706 14:48:25 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:39.706 14:48:25 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:39.706 14:48:25 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:39.706 14:48:25 -- target/discovery.sh@15 -- # hash nvme 00:07:39.706 14:48:25 -- target/discovery.sh@20 -- # nvmftestinit 00:07:39.706 14:48:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:39.706 14:48:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.706 14:48:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:39.706 14:48:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:39.706 14:48:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:39.706 14:48:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.706 14:48:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.706 14:48:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.706 14:48:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:39.706 14:48:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:39.706 14:48:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.706 14:48:25 -- common/autotest_common.sh@10 -- # set +x 00:07:41.605 14:48:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:41.605 14:48:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.605 14:48:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.605 14:48:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.605 14:48:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.605 14:48:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.605 14:48:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.605 14:48:27 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:41.605 14:48:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.605 14:48:27 -- nvmf/common.sh@296 -- # e810=() 00:07:41.605 14:48:27 -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.605 14:48:27 -- nvmf/common.sh@297 -- # x722=() 00:07:41.605 14:48:27 -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.605 14:48:27 -- nvmf/common.sh@298 -- # mlx=() 00:07:41.605 14:48:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.605 14:48:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.605 14:48:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.605 14:48:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.605 14:48:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.605 14:48:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.605 14:48:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:41.605 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:41.605 14:48:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.605 14:48:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:41.605 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:41.605 14:48:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.605 14:48:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.605 14:48:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.605 14:48:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:41.605 14:48:27 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.605 14:48:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:41.605 Found net devices under 0000:84:00.0: cvl_0_0 00:07:41.605 14:48:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.605 14:48:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.605 14:48:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.605 14:48:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:41.605 14:48:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.605 14:48:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:41.605 Found net devices under 0000:84:00.1: cvl_0_1 00:07:41.605 14:48:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.605 14:48:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:41.605 14:48:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:41.605 14:48:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:41.605 14:48:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:41.605 14:48:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.605 14:48:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.605 14:48:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.605 14:48:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.605 14:48:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.605 14:48:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.605 14:48:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.605 14:48:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.605 14:48:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.605 14:48:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.605 14:48:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.605 14:48:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.605 14:48:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.605 14:48:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.605 14:48:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.605 14:48:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.605 14:48:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.605 14:48:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.605 14:48:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.605 14:48:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:07:41.605 00:07:41.605 --- 10.0.0.2 ping statistics --- 00:07:41.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.605 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:07:41.605 14:48:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
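Both ends of these pings live on the same host: the two ports of the ice-driven Intel NIC (0000:84:00.0 and 0000:84:00.1, presumably cabled back to back) are split so that cvl_0_0 sits in a private network namespace as the target side and cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt process itself is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why 10.0.0.2 is usable as its listener address.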
00:07:41.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:07:41.605 00:07:41.605 --- 10.0.0.1 ping statistics --- 00:07:41.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.606 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:07:41.606 14:48:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.606 14:48:27 -- nvmf/common.sh@411 -- # return 0 00:07:41.606 14:48:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:41.606 14:48:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.606 14:48:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:41.606 14:48:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:41.606 14:48:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.606 14:48:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:41.606 14:48:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:41.606 14:48:27 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:41.606 14:48:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:41.606 14:48:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:41.606 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.606 14:48:27 -- nvmf/common.sh@470 -- # nvmfpid=3675217 00:07:41.606 14:48:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:41.606 14:48:27 -- nvmf/common.sh@471 -- # waitforlisten 3675217 00:07:41.606 14:48:27 -- common/autotest_common.sh@817 -- # '[' -z 3675217 ']' 00:07:41.606 14:48:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.606 14:48:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:41.606 14:48:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.606 14:48:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:41.606 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.606 [2024-04-26 14:48:27.212180] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:07:41.606 [2024-04-26 14:48:27.212248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.606 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.606 [2024-04-26 14:48:27.248855] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.606 [2024-04-26 14:48:27.282580] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.864 [2024-04-26 14:48:27.364897] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.864 [2024-04-26 14:48:27.364953] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.864 [2024-04-26 14:48:27.364983] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.864 [2024-04-26 14:48:27.364994] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
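The namespace plumbing traced above is the heart of nvmf_tcp_init on physical hardware: the two ports of the same E810 NIC play target (cvl_0_0, moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) and initiator (cvl_0_1, left in the root namespace as 10.0.0.1), and the two pings prove the loop is closed before the target application starts. Collected from the trace, the setup reduces to the following sketch (root required; the interface names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns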
00:07:41.864 [2024-04-26 14:48:27.365004] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.864 [2024-04-26 14:48:27.365068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.864 [2024-04-26 14:48:27.365183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.864 [2024-04-26 14:48:27.365251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.864 [2024-04-26 14:48:27.365248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.864 14:48:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:41.864 14:48:27 -- common/autotest_common.sh@850 -- # return 0 00:07:41.864 14:48:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:41.864 14:48:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:41.864 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.864 14:48:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.864 14:48:27 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.864 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.864 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.864 [2024-04-26 14:48:27.512857] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.864 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.864 14:48:27 -- target/discovery.sh@26 -- # seq 1 4 00:07:41.864 14:48:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:41.864 14:48:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:41.864 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.864 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.864 Null1 00:07:41.865 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.865 14:48:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.865 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.865 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.865 14:48:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:41.865 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.865 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.865 14:48:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.865 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.865 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 [2024-04-26 14:48:27.553223] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.865 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.865 14:48:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:41.865 14:48:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:41.865 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.865 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 Null2 00:07:41.865 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.865 14:48:27 -- target/discovery.sh@28 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:41.865 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.865 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.865 14:48:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:41.865 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.865 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.865 14:48:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:41.865 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.865 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.865 14:48:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:41.865 14:48:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:41.865 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.865 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 Null3 00:07:41.865 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.865 14:48:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:41.865 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.865 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.122 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.122 14:48:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:42.122 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.122 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.122 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.122 14:48:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:42.122 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.122 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.122 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.122 14:48:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:42.122 14:48:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:42.122 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.122 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.122 Null4 00:07:42.122 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.122 14:48:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:42.122 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.122 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.122 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.122 14:48:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:42.122 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.122 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.122 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.123 14:48:27 -- target/discovery.sh@30 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:42.123 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.123 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.123 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.123 14:48:27 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.123 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.123 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.123 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.123 14:48:27 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:42.123 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.123 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.123 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.123 14:48:27 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:07:42.123 00:07:42.123 Discovery Log Number of Records 6, Generation counter 6 00:07:42.123 =====Discovery Log Entry 0====== 00:07:42.123 trtype: tcp 00:07:42.123 adrfam: ipv4 00:07:42.123 subtype: current discovery subsystem 00:07:42.123 treq: not required 00:07:42.123 portid: 0 00:07:42.123 trsvcid: 4420 00:07:42.123 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:42.123 traddr: 10.0.0.2 00:07:42.123 eflags: explicit discovery connections, duplicate discovery information 00:07:42.123 sectype: none 00:07:42.123 =====Discovery Log Entry 1====== 00:07:42.123 trtype: tcp 00:07:42.123 adrfam: ipv4 00:07:42.123 subtype: nvme subsystem 00:07:42.123 treq: not required 00:07:42.123 portid: 0 00:07:42.123 trsvcid: 4420 00:07:42.123 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:42.123 traddr: 10.0.0.2 00:07:42.123 eflags: none 00:07:42.123 sectype: none 00:07:42.123 =====Discovery Log Entry 2====== 00:07:42.123 trtype: tcp 00:07:42.123 adrfam: ipv4 00:07:42.123 subtype: nvme subsystem 00:07:42.123 treq: not required 00:07:42.123 portid: 0 00:07:42.123 trsvcid: 4420 00:07:42.123 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:42.123 traddr: 10.0.0.2 00:07:42.123 eflags: none 00:07:42.123 sectype: none 00:07:42.123 =====Discovery Log Entry 3====== 00:07:42.123 trtype: tcp 00:07:42.123 adrfam: ipv4 00:07:42.123 subtype: nvme subsystem 00:07:42.123 treq: not required 00:07:42.123 portid: 0 00:07:42.123 trsvcid: 4420 00:07:42.123 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:42.123 traddr: 10.0.0.2 00:07:42.123 eflags: none 00:07:42.123 sectype: none 00:07:42.123 =====Discovery Log Entry 4====== 00:07:42.123 trtype: tcp 00:07:42.123 adrfam: ipv4 00:07:42.123 subtype: nvme subsystem 00:07:42.123 treq: not required 00:07:42.123 portid: 0 00:07:42.123 trsvcid: 4420 00:07:42.123 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:42.123 traddr: 10.0.0.2 00:07:42.123 eflags: none 00:07:42.123 sectype: none 00:07:42.123 =====Discovery Log Entry 5====== 00:07:42.123 trtype: tcp 00:07:42.123 adrfam: ipv4 00:07:42.123 subtype: discovery subsystem referral 00:07:42.123 treq: not required 00:07:42.123 portid: 0 00:07:42.123 trsvcid: 4430 00:07:42.123 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:42.123 traddr: 10.0.0.2 00:07:42.123 eflags: none 00:07:42.123 sectype: none 00:07:42.123 14:48:27 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem 
discovery via RPC' 00:07:42.123 Perform nvmf subsystem discovery via RPC 00:07:42.123 14:48:27 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:42.123 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.123 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.123 [2024-04-26 14:48:27.857985] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:42.123 [ 00:07:42.123 { 00:07:42.123 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:42.123 "subtype": "Discovery", 00:07:42.123 "listen_addresses": [ 00:07:42.123 { 00:07:42.123 "transport": "TCP", 00:07:42.381 "trtype": "TCP", 00:07:42.381 "adrfam": "IPv4", 00:07:42.381 "traddr": "10.0.0.2", 00:07:42.381 "trsvcid": "4420" 00:07:42.381 } 00:07:42.381 ], 00:07:42.381 "allow_any_host": true, 00:07:42.381 "hosts": [] 00:07:42.381 }, 00:07:42.381 { 00:07:42.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:42.381 "subtype": "NVMe", 00:07:42.381 "listen_addresses": [ 00:07:42.381 { 00:07:42.381 "transport": "TCP", 00:07:42.381 "trtype": "TCP", 00:07:42.381 "adrfam": "IPv4", 00:07:42.381 "traddr": "10.0.0.2", 00:07:42.381 "trsvcid": "4420" 00:07:42.381 } 00:07:42.381 ], 00:07:42.381 "allow_any_host": true, 00:07:42.381 "hosts": [], 00:07:42.381 "serial_number": "SPDK00000000000001", 00:07:42.381 "model_number": "SPDK bdev Controller", 00:07:42.381 "max_namespaces": 32, 00:07:42.381 "min_cntlid": 1, 00:07:42.381 "max_cntlid": 65519, 00:07:42.381 "namespaces": [ 00:07:42.381 { 00:07:42.381 "nsid": 1, 00:07:42.381 "bdev_name": "Null1", 00:07:42.381 "name": "Null1", 00:07:42.381 "nguid": "703C7073486F41AD9CC21072ACF4369F", 00:07:42.381 "uuid": "703c7073-486f-41ad-9cc2-1072acf4369f" 00:07:42.381 } 00:07:42.381 ] 00:07:42.381 }, 00:07:42.381 { 00:07:42.381 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:42.381 "subtype": "NVMe", 00:07:42.381 "listen_addresses": [ 00:07:42.381 { 00:07:42.381 "transport": "TCP", 00:07:42.381 "trtype": "TCP", 00:07:42.381 "adrfam": "IPv4", 00:07:42.381 "traddr": "10.0.0.2", 00:07:42.381 "trsvcid": "4420" 00:07:42.381 } 00:07:42.381 ], 00:07:42.381 "allow_any_host": true, 00:07:42.381 "hosts": [], 00:07:42.381 "serial_number": "SPDK00000000000002", 00:07:42.381 "model_number": "SPDK bdev Controller", 00:07:42.381 "max_namespaces": 32, 00:07:42.381 "min_cntlid": 1, 00:07:42.381 "max_cntlid": 65519, 00:07:42.381 "namespaces": [ 00:07:42.381 { 00:07:42.381 "nsid": 1, 00:07:42.381 "bdev_name": "Null2", 00:07:42.381 "name": "Null2", 00:07:42.381 "nguid": "F6AFE0BAF2F44216878212F14BB862E5", 00:07:42.381 "uuid": "f6afe0ba-f2f4-4216-8782-12f14bb862e5" 00:07:42.381 } 00:07:42.381 ] 00:07:42.381 }, 00:07:42.381 { 00:07:42.381 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:42.381 "subtype": "NVMe", 00:07:42.381 "listen_addresses": [ 00:07:42.381 { 00:07:42.381 "transport": "TCP", 00:07:42.381 "trtype": "TCP", 00:07:42.381 "adrfam": "IPv4", 00:07:42.381 "traddr": "10.0.0.2", 00:07:42.381 "trsvcid": "4420" 00:07:42.381 } 00:07:42.381 ], 00:07:42.381 "allow_any_host": true, 00:07:42.381 "hosts": [], 00:07:42.381 "serial_number": "SPDK00000000000003", 00:07:42.381 "model_number": "SPDK bdev Controller", 00:07:42.381 "max_namespaces": 32, 00:07:42.381 "min_cntlid": 1, 00:07:42.381 "max_cntlid": 65519, 00:07:42.381 "namespaces": [ 00:07:42.381 { 00:07:42.381 "nsid": 1, 00:07:42.381 "bdev_name": "Null3", 00:07:42.381 "name": "Null3", 00:07:42.381 "nguid": "638CA9F927C0498D973E99F5FFE1D197", 
00:07:42.381 "uuid": "638ca9f9-27c0-498d-973e-99f5ffe1d197" 00:07:42.381 } 00:07:42.381 ] 00:07:42.381 }, 00:07:42.381 { 00:07:42.381 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:42.381 "subtype": "NVMe", 00:07:42.381 "listen_addresses": [ 00:07:42.381 { 00:07:42.381 "transport": "TCP", 00:07:42.381 "trtype": "TCP", 00:07:42.381 "adrfam": "IPv4", 00:07:42.381 "traddr": "10.0.0.2", 00:07:42.381 "trsvcid": "4420" 00:07:42.381 } 00:07:42.381 ], 00:07:42.381 "allow_any_host": true, 00:07:42.381 "hosts": [], 00:07:42.381 "serial_number": "SPDK00000000000004", 00:07:42.381 "model_number": "SPDK bdev Controller", 00:07:42.381 "max_namespaces": 32, 00:07:42.381 "min_cntlid": 1, 00:07:42.381 "max_cntlid": 65519, 00:07:42.381 "namespaces": [ 00:07:42.381 { 00:07:42.381 "nsid": 1, 00:07:42.381 "bdev_name": "Null4", 00:07:42.381 "name": "Null4", 00:07:42.381 "nguid": "98B8B8986E8E458D9A4403F494031594", 00:07:42.381 "uuid": "98b8b898-6e8e-458d-9a44-03f494031594" 00:07:42.381 } 00:07:42.381 ] 00:07:42.381 } 00:07:42.381 ] 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@42 -- # seq 1 4 00:07:42.381 14:48:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.381 14:48:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.381 14:48:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.381 14:48:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.381 14:48:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 
-- # set +x 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.381 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.381 14:48:27 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:42.381 14:48:27 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:42.381 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.381 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:42.382 14:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.382 14:48:27 -- target/discovery.sh@49 -- # check_bdevs= 00:07:42.382 14:48:27 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:42.382 14:48:27 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:42.382 14:48:27 -- target/discovery.sh@57 -- # nvmftestfini 00:07:42.382 14:48:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:42.382 14:48:27 -- nvmf/common.sh@117 -- # sync 00:07:42.382 14:48:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.382 14:48:27 -- nvmf/common.sh@120 -- # set +e 00:07:42.382 14:48:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.382 14:48:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.382 rmmod nvme_tcp 00:07:42.382 rmmod nvme_fabrics 00:07:42.382 rmmod nvme_keyring 00:07:42.382 14:48:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.382 14:48:28 -- nvmf/common.sh@124 -- # set -e 00:07:42.382 14:48:28 -- nvmf/common.sh@125 -- # return 0 00:07:42.382 14:48:28 -- nvmf/common.sh@478 -- # '[' -n 3675217 ']' 00:07:42.382 14:48:28 -- nvmf/common.sh@479 -- # killprocess 3675217 00:07:42.382 14:48:28 -- common/autotest_common.sh@936 -- # '[' -z 3675217 ']' 00:07:42.382 14:48:28 -- common/autotest_common.sh@940 -- # kill -0 3675217 00:07:42.382 14:48:28 -- common/autotest_common.sh@941 -- # uname 00:07:42.382 14:48:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:42.382 14:48:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3675217 00:07:42.382 14:48:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:42.382 14:48:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:42.382 14:48:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3675217' 00:07:42.382 killing process with pid 3675217 00:07:42.382 14:48:28 -- common/autotest_common.sh@955 -- # kill 3675217 00:07:42.382 [2024-04-26 14:48:28.063392] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:42.382 14:48:28 -- common/autotest_common.sh@960 -- # wait 3675217 00:07:42.640 14:48:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:42.640 14:48:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:42.640 14:48:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:42.640 14:48:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.640 14:48:28 -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.640 14:48:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.640 14:48:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.640 14:48:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.172 14:48:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.172 00:07:45.172 real 0m5.317s 00:07:45.172 user 0m4.424s 00:07:45.172 sys 0m1.783s 00:07:45.172 14:48:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.172 14:48:30 -- common/autotest_common.sh@10 -- # set +x 00:07:45.172 ************************************ 00:07:45.172 END TEST nvmf_discovery 00:07:45.172 ************************************ 00:07:45.172 14:48:30 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.172 14:48:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:45.172 14:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.172 14:48:30 -- common/autotest_common.sh@10 -- # set +x 00:07:45.172 ************************************ 00:07:45.172 START TEST nvmf_referrals 00:07:45.172 ************************************ 00:07:45.172 14:48:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.172 * Looking for test storage... 00:07:45.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.172 14:48:30 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.172 14:48:30 -- nvmf/common.sh@7 -- # uname -s 00:07:45.172 14:48:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.172 14:48:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.172 14:48:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.172 14:48:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.172 14:48:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.172 14:48:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.172 14:48:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.172 14:48:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.172 14:48:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.172 14:48:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.172 14:48:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:45.172 14:48:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:45.172 14:48:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.172 14:48:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.172 14:48:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.172 14:48:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.172 14:48:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.172 14:48:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.172 14:48:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.172 14:48:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.173 14:48:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.173 14:48:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.173 14:48:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.173 14:48:30 -- paths/export.sh@5 -- # export PATH 00:07:45.173 14:48:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.173 14:48:30 -- nvmf/common.sh@47 -- # : 0 00:07:45.173 14:48:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.173 14:48:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.173 14:48:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.173 14:48:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.173 14:48:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.173 14:48:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.173 14:48:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.173 14:48:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.173 14:48:30 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:45.173 14:48:30 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:45.173 14:48:30 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:45.173 14:48:30 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:45.173 14:48:30 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:45.173 14:48:30 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:45.173 14:48:30 -- target/referrals.sh@37 -- # nvmftestinit 00:07:45.173 14:48:30 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:45.173 14:48:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.173 14:48:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:45.173 14:48:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:45.173 14:48:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:45.173 14:48:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.173 14:48:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.173 14:48:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.173 14:48:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:45.173 14:48:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:45.173 14:48:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.173 14:48:30 -- common/autotest_common.sh@10 -- # set +x 00:07:47.074 14:48:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:47.074 14:48:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:47.074 14:48:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:47.074 14:48:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:47.074 14:48:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:47.074 14:48:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:47.074 14:48:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:47.074 14:48:32 -- nvmf/common.sh@295 -- # net_devs=() 00:07:47.074 14:48:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:47.074 14:48:32 -- nvmf/common.sh@296 -- # e810=() 00:07:47.074 14:48:32 -- nvmf/common.sh@296 -- # local -ga e810 00:07:47.074 14:48:32 -- nvmf/common.sh@297 -- # x722=() 00:07:47.074 14:48:32 -- nvmf/common.sh@297 -- # local -ga x722 00:07:47.074 14:48:32 -- nvmf/common.sh@298 -- # mlx=() 00:07:47.074 14:48:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:47.074 14:48:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.074 14:48:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:47.074 14:48:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:47.074 14:48:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:47.074 14:48:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.074 14:48:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:47.074 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:47.074 14:48:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.074 14:48:32 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.074 14:48:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:47.074 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:47.074 14:48:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:47.074 14:48:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.074 14:48:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.074 14:48:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:47.074 14:48:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.074 14:48:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:47.074 Found net devices under 0000:84:00.0: cvl_0_0 00:07:47.074 14:48:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.074 14:48:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.074 14:48:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.074 14:48:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:47.074 14:48:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.074 14:48:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:47.074 Found net devices under 0000:84:00.1: cvl_0_1 00:07:47.074 14:48:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.074 14:48:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:47.074 14:48:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:47.074 14:48:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:47.074 14:48:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.074 14:48:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.074 14:48:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.074 14:48:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:47.074 14:48:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.074 14:48:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.074 14:48:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:47.074 14:48:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.074 14:48:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.074 14:48:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:47.074 14:48:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:47.074 14:48:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.074 14:48:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:07:47.074 14:48:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.074 14:48:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.074 14:48:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:47.074 14:48:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.074 14:48:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.074 14:48:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.074 14:48:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:47.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:07:47.074 00:07:47.074 --- 10.0.0.2 ping statistics --- 00:07:47.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.074 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:07:47.074 14:48:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:07:47.074 00:07:47.074 --- 10.0.0.1 ping statistics --- 00:07:47.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.074 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:47.074 14:48:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.074 14:48:32 -- nvmf/common.sh@411 -- # return 0 00:07:47.074 14:48:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:47.074 14:48:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.074 14:48:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:47.074 14:48:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.074 14:48:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:47.074 14:48:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:47.074 14:48:32 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:47.074 14:48:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:47.074 14:48:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:47.074 14:48:32 -- common/autotest_common.sh@10 -- # set +x 00:07:47.074 14:48:32 -- nvmf/common.sh@470 -- # nvmfpid=3677330 00:07:47.074 14:48:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.074 14:48:32 -- nvmf/common.sh@471 -- # waitforlisten 3677330 00:07:47.074 14:48:32 -- common/autotest_common.sh@817 -- # '[' -z 3677330 ']' 00:07:47.074 14:48:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.074 14:48:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:47.074 14:48:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.074 14:48:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:47.075 14:48:32 -- common/autotest_common.sh@10 -- # set +x 00:07:47.075 [2024-04-26 14:48:32.805975] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
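A few lines up, nvmfappstart launches the target inside that namespace with ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, where -m 0xF pins four reactor cores and -e 0xFFFF enables every tracepoint group. The waitforlisten step that follows amounts to polling the RPC socket until the app answers; a minimal sketch of the same idea, assuming an SPDK checkout as the working directory (spdk_get_version is a stock RPC method):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the target's /var/tmp/spdk.sock RPC endpoint responds
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done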
00:07:47.075 [2024-04-26 14:48:32.806060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.333 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.333 [2024-04-26 14:48:32.844864] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:47.333 [2024-04-26 14:48:32.877181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.333 [2024-04-26 14:48:32.968310] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.333 [2024-04-26 14:48:32.968380] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.333 [2024-04-26 14:48:32.968396] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.333 [2024-04-26 14:48:32.968410] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.333 [2024-04-26 14:48:32.968422] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.333 [2024-04-26 14:48:32.968512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.333 [2024-04-26 14:48:32.968565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.333 [2024-04-26 14:48:32.968630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.333 [2024-04-26 14:48:32.968633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.590 14:48:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:47.590 14:48:33 -- common/autotest_common.sh@850 -- # return 0 00:07:47.590 14:48:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:47.590 14:48:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:47.590 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.590 14:48:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.590 14:48:33 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.590 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.590 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.590 [2024-04-26 14:48:33.128085] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.590 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.590 14:48:33 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:47.590 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.590 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.590 [2024-04-26 14:48:33.140344] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:47.590 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.590 14:48:33 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:47.590 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.590 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.590 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.590 14:48:33 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:47.590 
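rpc_cmd is the autotest wrapper around scripts/rpc.py, so the referral setup this test drives reduces to roughly the following (RPC names and arguments copied from the trace; nvmf_discovery_get_referrals should report 3 entries afterwards):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect: 3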
14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.590 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.590 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.590 14:48:33 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:47.590 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.591 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.591 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.591 14:48:33 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.591 14:48:33 -- target/referrals.sh@48 -- # jq length 00:07:47.591 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.591 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.591 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.591 14:48:33 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:47.591 14:48:33 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:47.591 14:48:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:47.591 14:48:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.591 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.591 14:48:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:47.591 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.591 14:48:33 -- target/referrals.sh@21 -- # sort 00:07:47.591 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.591 14:48:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:47.591 14:48:33 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:47.591 14:48:33 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:47.591 14:48:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.591 14:48:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.591 14:48:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.591 14:48:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.591 14:48:33 -- target/referrals.sh@26 -- # sort 00:07:47.847 14:48:33 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:47.847 14:48:33 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:47.847 14:48:33 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:47.847 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.847 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.847 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.847 14:48:33 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:47.847 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.847 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.847 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.847 14:48:33 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:47.847 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.847 
14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.847 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.847 14:48:33 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.847 14:48:33 -- target/referrals.sh@56 -- # jq length 00:07:47.847 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.847 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:47.847 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.847 14:48:33 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:47.847 14:48:33 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:47.847 14:48:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.847 14:48:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.847 14:48:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.847 14:48:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.847 14:48:33 -- target/referrals.sh@26 -- # sort 00:07:48.104 14:48:33 -- target/referrals.sh@26 -- # echo 00:07:48.104 14:48:33 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:48.104 14:48:33 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:48.104 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.104 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:48.104 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.104 14:48:33 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:48.104 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.104 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:48.104 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.104 14:48:33 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:48.104 14:48:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:48.104 14:48:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:48.104 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.104 14:48:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:48.104 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:48.104 14:48:33 -- target/referrals.sh@21 -- # sort 00:07:48.104 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.104 14:48:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:48.104 14:48:33 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:48.104 14:48:33 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:48.104 14:48:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:48.104 14:48:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:48.104 14:48:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:48.104 14:48:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:48.104 14:48:33 -- target/referrals.sh@26 -- # sort 00:07:48.104 14:48:33 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:48.104 
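get_referral_ips cross-checks the same state from both ends: over RPC with nvmf_discovery_get_referrals piped through jq -r '.[].address.traddr', and over the wire by querying the discovery service itself. The wire-side pipeline, as traced (the --hostnqn/--hostid flags from the trace are omitted here for brevity):

    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort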
14:48:33 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:48.104 14:48:33 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:48.104 14:48:33 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:48.104 14:48:33 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:48.104 14:48:33 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:48.104 14:48:33 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:48.104 14:48:33 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:48.104 14:48:33 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:48.104 14:48:33 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:48.104 14:48:33 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:48.104 14:48:33 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:48.104 14:48:33 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:48.361 14:48:33 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:48.361 14:48:33 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:48.361 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.361 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:48.361 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.361 14:48:33 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:48.361 14:48:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:48.361 14:48:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:48.361 14:48:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:48.361 14:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.361 14:48:33 -- target/referrals.sh@21 -- # sort 00:07:48.361 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:07:48.361 14:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.361 14:48:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:48.361 14:48:33 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:48.361 14:48:33 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:48.361 14:48:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:48.361 14:48:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:48.362 14:48:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:48.362 14:48:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:48.362 14:48:33 -- target/referrals.sh@26 -- # sort 00:07:48.362 14:48:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:48.362 14:48:34 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:48.362 14:48:34 
-- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:48.362 14:48:34 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:48.362 14:48:34 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:48.362 14:48:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:48.362 14:48:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:48.619 14:48:34 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:48.619 14:48:34 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:48.619 14:48:34 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:48.619 14:48:34 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:48.619 14:48:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:48.619 14:48:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:48.619 14:48:34 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:48.619 14:48:34 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:48.619 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.619 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:07:48.619 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.619 14:48:34 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:48.619 14:48:34 -- target/referrals.sh@82 -- # jq length 00:07:48.619 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.619 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:07:48.619 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.619 14:48:34 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:48.619 14:48:34 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:48.619 14:48:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:48.619 14:48:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:48.619 14:48:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:48.619 14:48:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:48.619 14:48:34 -- target/referrals.sh@26 -- # sort 00:07:48.877 14:48:34 -- target/referrals.sh@26 -- # echo 00:07:48.877 14:48:34 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:48.877 14:48:34 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:48.877 14:48:34 -- target/referrals.sh@86 -- # nvmftestfini 00:07:48.877 14:48:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:48.877 14:48:34 -- nvmf/common.sh@117 -- # sync 00:07:48.877 14:48:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.877 14:48:34 -- nvmf/common.sh@120 -- # set +e 00:07:48.877 14:48:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.877 14:48:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.877 rmmod nvme_tcp 00:07:48.877 
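nvmftestfini unwinds the setup: sync, unload the initiator modules (the rmmod lines in the trace are the -v output of modprobe), kill the target, and tear down the namespace. A rough sketch of the sequence, with the namespace removal line being an assumption about what remove_spdk_ns does internally:

    sync
    modprobe -v -r nvme-tcp            # prints the rmmod lines seen in the log
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1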
rmmod nvme_fabrics 00:07:48.877 rmmod nvme_keyring 00:07:48.878 14:48:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.878 14:48:34 -- nvmf/common.sh@124 -- # set -e 00:07:48.878 14:48:34 -- nvmf/common.sh@125 -- # return 0 00:07:48.878 14:48:34 -- nvmf/common.sh@478 -- # '[' -n 3677330 ']' 00:07:48.878 14:48:34 -- nvmf/common.sh@479 -- # killprocess 3677330 00:07:48.878 14:48:34 -- common/autotest_common.sh@936 -- # '[' -z 3677330 ']' 00:07:48.878 14:48:34 -- common/autotest_common.sh@940 -- # kill -0 3677330 00:07:48.878 14:48:34 -- common/autotest_common.sh@941 -- # uname 00:07:48.878 14:48:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.878 14:48:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3677330 00:07:48.878 14:48:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:48.878 14:48:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:48.878 14:48:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3677330' 00:07:48.878 killing process with pid 3677330 00:07:48.878 14:48:34 -- common/autotest_common.sh@955 -- # kill 3677330 00:07:48.878 14:48:34 -- common/autotest_common.sh@960 -- # wait 3677330 00:07:49.135 14:48:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:49.135 14:48:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:49.135 14:48:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:49.135 14:48:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.135 14:48:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.135 14:48:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.135 14:48:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.135 14:48:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.038 14:48:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.038 00:07:51.038 real 0m6.262s 00:07:51.038 user 0m8.169s 00:07:51.038 sys 0m1.980s 00:07:51.038 14:48:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.038 14:48:36 -- common/autotest_common.sh@10 -- # set +x 00:07:51.038 ************************************ 00:07:51.038 END TEST nvmf_referrals 00:07:51.038 ************************************ 00:07:51.038 14:48:36 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:51.038 14:48:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.038 14:48:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.038 14:48:36 -- common/autotest_common.sh@10 -- # set +x 00:07:51.296 ************************************ 00:07:51.296 START TEST nvmf_connect_disconnect 00:07:51.296 ************************************ 00:07:51.296 14:48:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:51.296 * Looking for test storage... 
00:07:51.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.296 14:48:36 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.296 14:48:36 -- nvmf/common.sh@7 -- # uname -s 00:07:51.296 14:48:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.296 14:48:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.296 14:48:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.296 14:48:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.296 14:48:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.296 14:48:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.296 14:48:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.296 14:48:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.296 14:48:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.296 14:48:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.296 14:48:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:51.296 14:48:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:51.296 14:48:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.296 14:48:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.296 14:48:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.296 14:48:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.296 14:48:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.296 14:48:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.296 14:48:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.296 14:48:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.296 14:48:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.296 14:48:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.296 14:48:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.296 14:48:36 -- paths/export.sh@5 -- # export PATH 00:07:51.296 14:48:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.296 14:48:36 -- nvmf/common.sh@47 -- # : 0 00:07:51.296 14:48:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.296 14:48:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.296 14:48:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.296 14:48:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.296 14:48:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.296 14:48:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.296 14:48:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.296 14:48:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.296 14:48:36 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.296 14:48:36 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.296 14:48:36 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:51.296 14:48:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:51.296 14:48:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.296 14:48:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:51.296 14:48:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:51.296 14:48:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:51.296 14:48:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.296 14:48:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.296 14:48:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.296 14:48:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:51.296 14:48:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:51.296 14:48:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.296 14:48:36 -- common/autotest_common.sh@10 -- # set +x 00:07:53.825 14:48:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:53.825 14:48:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:53.825 14:48:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:53.825 14:48:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:53.825 14:48:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:53.825 14:48:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:53.825 14:48:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:53.825 14:48:38 -- nvmf/common.sh@295 -- # net_devs=() 00:07:53.825 14:48:38 -- nvmf/common.sh@295 -- # local -ga net_devs 
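The gather_supported_nvmf_pci_devs step that follows classifies NICs by PCI vendor/device ID (Intel 0x8086 E810/X722 parts and Mellanox 0x15b3 parts) before the TCP setup picks a pair of ports. To reproduce the lookup by hand, something like the following lists the matching functions; the lspci filter is an illustration, and the 0x159b ID is taken from the scan output below:

  # List Intel E810 functions by vendor:device ID, as the test's PCI scan does;
  # the log below reports 0000:84:00.0 and 0000:84:00.1 with this ID.
  lspci -d 8086:159b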
00:07:53.825 14:48:38 -- nvmf/common.sh@296 -- # e810=() 00:07:53.825 14:48:38 -- nvmf/common.sh@296 -- # local -ga e810 00:07:53.825 14:48:38 -- nvmf/common.sh@297 -- # x722=() 00:07:53.825 14:48:38 -- nvmf/common.sh@297 -- # local -ga x722 00:07:53.825 14:48:38 -- nvmf/common.sh@298 -- # mlx=() 00:07:53.825 14:48:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:53.825 14:48:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.825 14:48:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:53.825 14:48:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:53.825 14:48:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:53.825 14:48:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.825 14:48:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:53.825 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:53.825 14:48:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.825 14:48:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:53.825 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:53.825 14:48:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:53.825 14:48:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.825 14:48:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.825 14:48:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:53.825 14:48:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.825 14:48:38 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:84:00.0: cvl_0_0' 00:07:53.825 Found net devices under 0000:84:00.0: cvl_0_0 00:07:53.825 14:48:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.825 14:48:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.825 14:48:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.825 14:48:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:53.825 14:48:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.825 14:48:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:53.825 Found net devices under 0000:84:00.1: cvl_0_1 00:07:53.825 14:48:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.825 14:48:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:53.825 14:48:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:53.825 14:48:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:53.825 14:48:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:53.825 14:48:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.825 14:48:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.825 14:48:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.825 14:48:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:53.825 14:48:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.825 14:48:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.825 14:48:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:53.825 14:48:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.825 14:48:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.825 14:48:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:53.825 14:48:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:53.825 14:48:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.826 14:48:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.826 14:48:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.826 14:48:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.826 14:48:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:53.826 14:48:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.826 14:48:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.826 14:48:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.826 14:48:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:53.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:53.826 00:07:53.826 --- 10.0.0.2 ping statistics --- 00:07:53.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.826 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:53.826 14:48:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:07:53.826 00:07:53.826 --- 10.0.0.1 ping statistics --- 00:07:53.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.826 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:53.826 14:48:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.826 14:48:39 -- nvmf/common.sh@411 -- # return 0 00:07:53.826 14:48:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:53.826 14:48:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.826 14:48:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:53.826 14:48:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:53.826 14:48:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.826 14:48:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:53.826 14:48:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:53.826 14:48:39 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:53.826 14:48:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:53.826 14:48:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:53.826 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:07:53.826 14:48:39 -- nvmf/common.sh@470 -- # nvmfpid=3679529 00:07:53.826 14:48:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.826 14:48:39 -- nvmf/common.sh@471 -- # waitforlisten 3679529 00:07:53.826 14:48:39 -- common/autotest_common.sh@817 -- # '[' -z 3679529 ']' 00:07:53.826 14:48:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.826 14:48:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:53.826 14:48:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.826 14:48:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:53.826 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:07:53.826 [2024-04-26 14:48:39.190154] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:07:53.826 [2024-04-26 14:48:39.190232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.826 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.826 [2024-04-26 14:48:39.229615] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:53.826 [2024-04-26 14:48:39.268714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.826 [2024-04-26 14:48:39.357899] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.826 [2024-04-26 14:48:39.357960] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.826 [2024-04-26 14:48:39.357994] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.826 [2024-04-26 14:48:39.358042] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:53.826 [2024-04-26 14:48:39.358062] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.826 [2024-04-26 14:48:39.358124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.826 [2024-04-26 14:48:39.358189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.826 [2024-04-26 14:48:39.358254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.826 [2024-04-26 14:48:39.358263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.826 14:48:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:53.826 14:48:39 -- common/autotest_common.sh@850 -- # return 0 00:07:53.826 14:48:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:53.826 14:48:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:53.826 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:07:53.826 14:48:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.826 14:48:39 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:53.826 14:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.826 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:07:53.826 [2024-04-26 14:48:39.516646] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.826 14:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.826 14:48:39 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:53.826 14:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.826 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:07:53.826 14:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.826 14:48:39 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:53.826 14:48:39 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.826 14:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.826 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:07:53.826 14:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.826 14:48:39 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.826 14:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.826 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:07:54.083 14:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.083 14:48:39 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.083 14:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.083 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:07:54.083 [2024-04-26 14:48:39.573768] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.083 14:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.083 14:48:39 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:54.083 14:48:39 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:54.083 14:48:39 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:54.083 14:48:39 -- target/connect_disconnect.sh@34 -- # set +x 00:07:56.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
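The "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines that begin above and continue below are the 100 iterations of the connect/disconnect loop against the malloc-backed subsystem configured just above (num_iterations=100, NVME_CONNECT='nvme connect -i 8'). A minimal sketch of the same bring-up and loop, using scripts/rpc.py in place of the suite's rpc_cmd wrapper (the rpc.py path is an assumption):

  # Target side: transport, malloc bdev, subsystem, namespace, listener,
  # mirroring the xtrace above.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512   # 64 MiB bdev, 512-byte blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: 100 connect/disconnect cycles; each disconnect prints one
  # "disconnected 1 controller(s)" line like those seen below.
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done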
00:08:01.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.235 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:51.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.482 [2024-04-26 14:51:44.610778] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8410 is same with the state(5) to be set 00:10:59.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:11:35.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.038 14:52:25 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:40.038 14:52:25 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:40.039 14:52:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:40.039 14:52:25 -- nvmf/common.sh@117 -- # sync 00:11:40.039 14:52:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:40.039 14:52:25 -- nvmf/common.sh@120 -- # set +e 00:11:40.039 14:52:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:40.039 14:52:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:40.039 rmmod nvme_tcp 00:11:40.039 rmmod nvme_fabrics 00:11:40.039 rmmod nvme_keyring 00:11:40.039 14:52:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:40.039 14:52:25 -- nvmf/common.sh@124 -- # set -e 00:11:40.039 14:52:25 -- nvmf/common.sh@125 -- # return 0 00:11:40.039 14:52:25 -- nvmf/common.sh@478 -- # '[' -n 3679529 ']' 00:11:40.039 14:52:25 -- nvmf/common.sh@479 -- # killprocess 3679529 00:11:40.039 14:52:25 -- common/autotest_common.sh@936 -- # '[' -z 3679529 ']' 00:11:40.039 14:52:25 -- common/autotest_common.sh@940 -- # kill -0 3679529 00:11:40.039 14:52:25 -- common/autotest_common.sh@941 -- # uname 00:11:40.039 14:52:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.039 14:52:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3679529 00:11:40.039 14:52:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:40.039 14:52:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:40.039 14:52:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3679529' 00:11:40.039 killing process with pid 3679529 00:11:40.039 14:52:25 -- common/autotest_common.sh@955 -- # kill 3679529 00:11:40.039 14:52:25 -- common/autotest_common.sh@960 -- # wait 3679529 00:11:40.039 14:52:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:40.039 14:52:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:40.039 14:52:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:40.039 14:52:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.039 14:52:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:40.039 14:52:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.039 14:52:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.039 14:52:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.604 14:52:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:42.604 00:11:42.604 real 3m50.890s 00:11:42.604 user 14m37.991s 00:11:42.604 sys 0m31.903s 00:11:42.604 14:52:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:42.604 14:52:27 -- common/autotest_common.sh@10 -- # set +x 00:11:42.604 ************************************ 00:11:42.604 END TEST nvmf_connect_disconnect 00:11:42.604 ************************************ 00:11:42.604 14:52:27 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:42.604 14:52:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:42.604 14:52:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.604 14:52:27 -- common/autotest_common.sh@10 -- # set +x 00:11:42.604 ************************************ 
00:11:42.604 START TEST nvmf_multitarget 00:11:42.604 ************************************ 00:11:42.604 14:52:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:42.604 * Looking for test storage... 00:11:42.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.604 14:52:27 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.604 14:52:27 -- nvmf/common.sh@7 -- # uname -s 00:11:42.604 14:52:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.604 14:52:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.604 14:52:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.604 14:52:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.604 14:52:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.604 14:52:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.604 14:52:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.604 14:52:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.604 14:52:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.604 14:52:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.604 14:52:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.604 14:52:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.604 14:52:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.604 14:52:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.604 14:52:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.604 14:52:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.604 14:52:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.604 14:52:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.604 14:52:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.604 14:52:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.604 14:52:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.604 14:52:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.604 14:52:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.604 14:52:27 -- paths/export.sh@5 -- # export PATH 00:11:42.604 14:52:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.604 14:52:27 -- nvmf/common.sh@47 -- # : 0 00:11:42.604 14:52:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:42.604 14:52:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:42.604 14:52:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.604 14:52:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.604 14:52:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.604 14:52:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:42.604 14:52:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:42.604 14:52:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:42.604 14:52:27 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:42.604 14:52:27 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:42.604 14:52:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:42.604 14:52:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.604 14:52:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:42.604 14:52:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:42.604 14:52:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:42.604 14:52:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.604 14:52:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.604 14:52:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.604 14:52:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:42.604 14:52:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:42.604 14:52:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:42.604 14:52:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.510 14:52:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:44.510 14:52:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:44.510 14:52:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:44.510 14:52:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:44.510 14:52:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:44.510 14:52:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:44.510 14:52:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:44.510 14:52:30 -- nvmf/common.sh@295 -- # net_devs=() 00:11:44.510 14:52:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:44.510 14:52:30 -- 
nvmf/common.sh@296 -- # e810=() 00:11:44.510 14:52:30 -- nvmf/common.sh@296 -- # local -ga e810 00:11:44.510 14:52:30 -- nvmf/common.sh@297 -- # x722=() 00:11:44.510 14:52:30 -- nvmf/common.sh@297 -- # local -ga x722 00:11:44.510 14:52:30 -- nvmf/common.sh@298 -- # mlx=() 00:11:44.510 14:52:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:44.510 14:52:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.510 14:52:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.510 14:52:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.510 14:52:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.510 14:52:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.510 14:52:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.510 14:52:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.511 14:52:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.511 14:52:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.511 14:52:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.511 14:52:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.511 14:52:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:44.511 14:52:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:44.511 14:52:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:44.511 14:52:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.511 14:52:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:44.511 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:44.511 14:52:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.511 14:52:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:44.511 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:44.511 14:52:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:44.511 14:52:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.511 14:52:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.511 14:52:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:44.511 14:52:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.511 14:52:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:11:44.511 Found net devices under 0000:84:00.0: cvl_0_0 00:11:44.511 14:52:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.511 14:52:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.511 14:52:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.511 14:52:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:44.511 14:52:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.511 14:52:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:44.511 Found net devices under 0000:84:00.1: cvl_0_1 00:11:44.511 14:52:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.511 14:52:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:44.511 14:52:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:44.511 14:52:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:44.511 14:52:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.511 14:52:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.511 14:52:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.511 14:52:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:44.511 14:52:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.511 14:52:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.511 14:52:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:44.511 14:52:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.511 14:52:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.511 14:52:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:44.511 14:52:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:44.511 14:52:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.511 14:52:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.511 14:52:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.511 14:52:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.511 14:52:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:44.511 14:52:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.511 14:52:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.511 14:52:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.511 14:52:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:11:44.511 00:11:44.511 --- 10.0.0.2 ping statistics --- 00:11:44.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.511 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:11:44.511 14:52:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:44.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:11:44.511 00:11:44.511 --- 10.0.0.1 ping statistics --- 00:11:44.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.511 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:44.511 14:52:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.511 14:52:30 -- nvmf/common.sh@411 -- # return 0 00:11:44.511 14:52:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:44.511 14:52:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.511 14:52:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:44.511 14:52:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.511 14:52:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:44.511 14:52:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:44.511 14:52:30 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:44.511 14:52:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:44.511 14:52:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:44.511 14:52:30 -- common/autotest_common.sh@10 -- # set +x 00:11:44.511 14:52:30 -- nvmf/common.sh@470 -- # nvmfpid=3710122 00:11:44.511 14:52:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.511 14:52:30 -- nvmf/common.sh@471 -- # waitforlisten 3710122 00:11:44.511 14:52:30 -- common/autotest_common.sh@817 -- # '[' -z 3710122 ']' 00:11:44.511 14:52:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.511 14:52:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:44.511 14:52:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.511 14:52:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:44.511 14:52:30 -- common/autotest_common.sh@10 -- # set +x 00:11:44.511 [2024-04-26 14:52:30.241735] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:11:44.511 [2024-04-26 14:52:30.241822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.770 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.770 [2024-04-26 14:52:30.282006] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:44.770 [2024-04-26 14:52:30.314736] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.770 [2024-04-26 14:52:30.407980] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.770 [2024-04-26 14:52:30.408048] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.770 [2024-04-26 14:52:30.408077] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.770 [2024-04-26 14:52:30.408091] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:44.770 [2024-04-26 14:52:30.408104] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.770 [2024-04-26 14:52:30.408176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.770 [2024-04-26 14:52:30.408240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.770 [2024-04-26 14:52:30.408293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.770 [2024-04-26 14:52:30.408296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.027 14:52:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:45.027 14:52:30 -- common/autotest_common.sh@850 -- # return 0 00:11:45.027 14:52:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:45.027 14:52:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:45.027 14:52:30 -- common/autotest_common.sh@10 -- # set +x 00:11:45.027 14:52:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.027 14:52:30 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:45.027 14:52:30 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:45.027 14:52:30 -- target/multitarget.sh@21 -- # jq length 00:11:45.027 14:52:30 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:45.027 14:52:30 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:45.285 "nvmf_tgt_1" 00:11:45.285 14:52:30 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:45.285 "nvmf_tgt_2" 00:11:45.285 14:52:30 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:45.285 14:52:30 -- target/multitarget.sh@28 -- # jq length 00:11:45.542 14:52:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:45.542 14:52:31 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:45.542 true 00:11:45.542 14:52:31 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:45.542 true 00:11:45.542 14:52:31 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:45.542 14:52:31 -- target/multitarget.sh@35 -- # jq length 00:11:45.800 14:52:31 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:45.800 14:52:31 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:45.800 14:52:31 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:45.800 14:52:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:45.800 14:52:31 -- nvmf/common.sh@117 -- # sync 00:11:45.801 14:52:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:45.801 14:52:31 -- nvmf/common.sh@120 -- # set +e 00:11:45.801 14:52:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:45.801 14:52:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:45.801 rmmod nvme_tcp 00:11:45.801 rmmod nvme_fabrics 00:11:45.801 rmmod nvme_keyring 00:11:45.801 14:52:31 -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:11:45.801 14:52:31 -- nvmf/common.sh@124 -- # set -e 00:11:45.801 14:52:31 -- nvmf/common.sh@125 -- # return 0 00:11:45.801 14:52:31 -- nvmf/common.sh@478 -- # '[' -n 3710122 ']' 00:11:45.801 14:52:31 -- nvmf/common.sh@479 -- # killprocess 3710122 00:11:45.801 14:52:31 -- common/autotest_common.sh@936 -- # '[' -z 3710122 ']' 00:11:45.801 14:52:31 -- common/autotest_common.sh@940 -- # kill -0 3710122 00:11:45.801 14:52:31 -- common/autotest_common.sh@941 -- # uname 00:11:45.801 14:52:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:45.801 14:52:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3710122 00:11:45.801 14:52:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:45.801 14:52:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:45.801 14:52:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3710122' 00:11:45.801 killing process with pid 3710122 00:11:45.801 14:52:31 -- common/autotest_common.sh@955 -- # kill 3710122 00:11:45.801 14:52:31 -- common/autotest_common.sh@960 -- # wait 3710122 00:11:46.060 14:52:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:46.060 14:52:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:46.060 14:52:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:46.060 14:52:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.060 14:52:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.060 14:52:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.060 14:52:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.060 14:52:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.595 14:52:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.595 00:11:48.595 real 0m5.862s 00:11:48.595 user 0m6.503s 00:11:48.595 sys 0m2.012s 00:11:48.595 14:52:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.595 14:52:33 -- common/autotest_common.sh@10 -- # set +x 00:11:48.595 ************************************ 00:11:48.595 END TEST nvmf_multitarget 00:11:48.595 ************************************ 00:11:48.595 14:52:33 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:48.595 14:52:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:48.595 14:52:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.595 14:52:33 -- common/autotest_common.sh@10 -- # set +x 00:11:48.595 ************************************ 00:11:48.595 START TEST nvmf_rpc 00:11:48.595 ************************************ 00:11:48.595 14:52:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:48.595 * Looking for test storage... 
00:11:48.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.595 14:52:33 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.595 14:52:33 -- nvmf/common.sh@7 -- # uname -s 00:11:48.595 14:52:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.595 14:52:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.595 14:52:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.595 14:52:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.595 14:52:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.595 14:52:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.595 14:52:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.595 14:52:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.595 14:52:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.595 14:52:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.595 14:52:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:48.595 14:52:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:48.595 14:52:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.595 14:52:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.595 14:52:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.595 14:52:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.595 14:52:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.595 14:52:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.595 14:52:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.595 14:52:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.595 14:52:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.595 14:52:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.595 14:52:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.595 14:52:33 -- paths/export.sh@5 -- # export PATH 00:11:48.595 14:52:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.595 14:52:33 -- nvmf/common.sh@47 -- # : 0 00:11:48.595 14:52:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.595 14:52:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.595 14:52:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.595 14:52:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.595 14:52:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.595 14:52:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.595 14:52:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.595 14:52:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.595 14:52:33 -- target/rpc.sh@11 -- # loops=5 00:11:48.595 14:52:33 -- target/rpc.sh@23 -- # nvmftestinit 00:11:48.595 14:52:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:48.595 14:52:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.595 14:52:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:48.595 14:52:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:48.595 14:52:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:48.595 14:52:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.595 14:52:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.595 14:52:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.595 14:52:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:48.595 14:52:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:48.595 14:52:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.595 14:52:33 -- common/autotest_common.sh@10 -- # set +x 00:11:50.494 14:52:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:50.494 14:52:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:50.494 14:52:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:50.494 14:52:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:50.494 14:52:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:50.494 14:52:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:50.494 14:52:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:50.494 14:52:35 -- nvmf/common.sh@295 -- # net_devs=() 00:11:50.494 14:52:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:50.494 14:52:35 -- nvmf/common.sh@296 -- # e810=() 00:11:50.494 14:52:35 -- nvmf/common.sh@296 -- # local -ga e810 00:11:50.494 
14:52:35 -- nvmf/common.sh@297 -- # x722=() 00:11:50.494 14:52:35 -- nvmf/common.sh@297 -- # local -ga x722 00:11:50.494 14:52:35 -- nvmf/common.sh@298 -- # mlx=() 00:11:50.494 14:52:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:50.494 14:52:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.494 14:52:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:50.494 14:52:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:50.494 14:52:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:50.494 14:52:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.494 14:52:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:50.494 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:50.494 14:52:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.494 14:52:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:50.494 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:50.494 14:52:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:50.494 14:52:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.494 14:52:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.494 14:52:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:50.494 14:52:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.494 14:52:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:50.494 Found net devices under 0000:84:00.0: cvl_0_0 00:11:50.494 14:52:35 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:50.494 14:52:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.494 14:52:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.494 14:52:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:50.494 14:52:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.494 14:52:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:50.494 Found net devices under 0000:84:00.1: cvl_0_1 00:11:50.494 14:52:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.494 14:52:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:50.494 14:52:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:50.494 14:52:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:50.494 14:52:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:50.494 14:52:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.494 14:52:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.494 14:52:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.494 14:52:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:50.494 14:52:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.494 14:52:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.494 14:52:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:50.494 14:52:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.494 14:52:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.494 14:52:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:50.494 14:52:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:50.494 14:52:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.494 14:52:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.494 14:52:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.494 14:52:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.494 14:52:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:50.494 14:52:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.494 14:52:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.495 14:52:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.495 14:52:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:50.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:11:50.495 00:11:50.495 --- 10.0.0.2 ping statistics --- 00:11:50.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.495 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:11:50.495 14:52:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:11:50.495 00:11:50.495 --- 10.0.0.1 ping statistics --- 00:11:50.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.495 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:11:50.495 14:52:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.495 14:52:35 -- nvmf/common.sh@411 -- # return 0 00:11:50.495 14:52:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:50.495 14:52:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.495 14:52:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:50.495 14:52:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:50.495 14:52:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.495 14:52:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:50.495 14:52:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:50.495 14:52:35 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:50.495 14:52:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:50.495 14:52:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:50.495 14:52:35 -- common/autotest_common.sh@10 -- # set +x 00:11:50.495 14:52:35 -- nvmf/common.sh@470 -- # nvmfpid=3712261 00:11:50.495 14:52:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.495 14:52:35 -- nvmf/common.sh@471 -- # waitforlisten 3712261 00:11:50.495 14:52:35 -- common/autotest_common.sh@817 -- # '[' -z 3712261 ']' 00:11:50.495 14:52:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.495 14:52:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:50.495 14:52:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.495 14:52:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:50.495 14:52:35 -- common/autotest_common.sh@10 -- # set +x 00:11:50.495 [2024-04-26 14:52:36.002778] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:11:50.495 [2024-04-26 14:52:36.002856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.495 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.495 [2024-04-26 14:52:36.039467] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:50.495 [2024-04-26 14:52:36.069354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.495 [2024-04-26 14:52:36.161477] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.495 [2024-04-26 14:52:36.161544] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.495 [2024-04-26 14:52:36.161569] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.495 [2024-04-26 14:52:36.161590] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:50.495 [2024-04-26 14:52:36.161608] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.495 [2024-04-26 14:52:36.161717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.495 [2024-04-26 14:52:36.161779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.495 [2024-04-26 14:52:36.161833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.495 [2024-04-26 14:52:36.161838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.752 14:52:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:50.752 14:52:36 -- common/autotest_common.sh@850 -- # return 0 00:11:50.752 14:52:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:50.752 14:52:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:50.752 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:50.752 14:52:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.752 14:52:36 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:50.752 14:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.752 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:50.752 14:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.752 14:52:36 -- target/rpc.sh@26 -- # stats='{ 00:11:50.752 "tick_rate": 2700000000, 00:11:50.752 "poll_groups": [ 00:11:50.752 { 00:11:50.752 "name": "nvmf_tgt_poll_group_0", 00:11:50.752 "admin_qpairs": 0, 00:11:50.752 "io_qpairs": 0, 00:11:50.752 "current_admin_qpairs": 0, 00:11:50.752 "current_io_qpairs": 0, 00:11:50.752 "pending_bdev_io": 0, 00:11:50.752 "completed_nvme_io": 0, 00:11:50.752 "transports": [] 00:11:50.752 }, 00:11:50.752 { 00:11:50.752 "name": "nvmf_tgt_poll_group_1", 00:11:50.752 "admin_qpairs": 0, 00:11:50.752 "io_qpairs": 0, 00:11:50.753 "current_admin_qpairs": 0, 00:11:50.753 "current_io_qpairs": 0, 00:11:50.753 "pending_bdev_io": 0, 00:11:50.753 "completed_nvme_io": 0, 00:11:50.753 "transports": [] 00:11:50.753 }, 00:11:50.753 { 00:11:50.753 "name": "nvmf_tgt_poll_group_2", 00:11:50.753 "admin_qpairs": 0, 00:11:50.753 "io_qpairs": 0, 00:11:50.753 "current_admin_qpairs": 0, 00:11:50.753 "current_io_qpairs": 0, 00:11:50.753 "pending_bdev_io": 0, 00:11:50.753 "completed_nvme_io": 0, 00:11:50.753 "transports": [] 00:11:50.753 }, 00:11:50.753 { 00:11:50.753 "name": "nvmf_tgt_poll_group_3", 00:11:50.753 "admin_qpairs": 0, 00:11:50.753 "io_qpairs": 0, 00:11:50.753 "current_admin_qpairs": 0, 00:11:50.753 "current_io_qpairs": 0, 00:11:50.753 "pending_bdev_io": 0, 00:11:50.753 "completed_nvme_io": 0, 00:11:50.753 "transports": [] 00:11:50.753 } 00:11:50.753 ] 00:11:50.753 }' 00:11:50.753 14:52:36 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:50.753 14:52:36 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:50.753 14:52:36 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:50.753 14:52:36 -- target/rpc.sh@15 -- # wc -l 00:11:50.753 14:52:36 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:50.753 14:52:36 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:50.753 14:52:36 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:50.753 14:52:36 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.753 14:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.753 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:50.753 [2024-04-26 14:52:36.415189] tcp.c: 669:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:11:50.753 14:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.753 14:52:36 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:50.753 14:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.753 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:50.753 14:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.753 14:52:36 -- target/rpc.sh@33 -- # stats='{ 00:11:50.753 "tick_rate": 2700000000, 00:11:50.753 "poll_groups": [ 00:11:50.753 { 00:11:50.753 "name": "nvmf_tgt_poll_group_0", 00:11:50.753 "admin_qpairs": 0, 00:11:50.753 "io_qpairs": 0, 00:11:50.753 "current_admin_qpairs": 0, 00:11:50.753 "current_io_qpairs": 0, 00:11:50.753 "pending_bdev_io": 0, 00:11:50.753 "completed_nvme_io": 0, 00:11:50.753 "transports": [ 00:11:50.753 { 00:11:50.753 "trtype": "TCP" 00:11:50.753 } 00:11:50.753 ] 00:11:50.753 }, 00:11:50.753 { 00:11:50.753 "name": "nvmf_tgt_poll_group_1", 00:11:50.753 "admin_qpairs": 0, 00:11:50.753 "io_qpairs": 0, 00:11:50.753 "current_admin_qpairs": 0, 00:11:50.753 "current_io_qpairs": 0, 00:11:50.753 "pending_bdev_io": 0, 00:11:50.753 "completed_nvme_io": 0, 00:11:50.753 "transports": [ 00:11:50.753 { 00:11:50.753 "trtype": "TCP" 00:11:50.753 } 00:11:50.753 ] 00:11:50.753 }, 00:11:50.753 { 00:11:50.753 "name": "nvmf_tgt_poll_group_2", 00:11:50.753 "admin_qpairs": 0, 00:11:50.753 "io_qpairs": 0, 00:11:50.753 "current_admin_qpairs": 0, 00:11:50.753 "current_io_qpairs": 0, 00:11:50.753 "pending_bdev_io": 0, 00:11:50.753 "completed_nvme_io": 0, 00:11:50.753 "transports": [ 00:11:50.753 { 00:11:50.753 "trtype": "TCP" 00:11:50.753 } 00:11:50.753 ] 00:11:50.753 }, 00:11:50.753 { 00:11:50.753 "name": "nvmf_tgt_poll_group_3", 00:11:50.753 "admin_qpairs": 0, 00:11:50.753 "io_qpairs": 0, 00:11:50.753 "current_admin_qpairs": 0, 00:11:50.753 "current_io_qpairs": 0, 00:11:50.753 "pending_bdev_io": 0, 00:11:50.753 "completed_nvme_io": 0, 00:11:50.753 "transports": [ 00:11:50.753 { 00:11:50.753 "trtype": "TCP" 00:11:50.753 } 00:11:50.753 ] 00:11:50.753 } 00:11:50.753 ] 00:11:50.753 }' 00:11:50.753 14:52:36 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:50.753 14:52:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:50.753 14:52:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:50.753 14:52:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:50.753 14:52:36 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:50.753 14:52:36 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:50.753 14:52:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:50.753 14:52:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:50.753 14:52:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:51.010 14:52:36 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:51.010 14:52:36 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:51.010 14:52:36 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:51.010 14:52:36 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:51.010 14:52:36 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:51.010 14:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.010 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:51.010 Malloc1 00:11:51.010 14:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.010 14:52:36 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.010 14:52:36 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.010 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:51.010 14:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.010 14:52:36 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.010 14:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.010 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:51.010 14:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.010 14:52:36 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:51.010 14:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.010 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:51.010 14:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.010 14:52:36 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.010 14:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.010 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:51.010 [2024-04-26 14:52:36.576993] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.010 14:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.010 14:52:36 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:11:51.010 14:52:36 -- common/autotest_common.sh@638 -- # local es=0 00:11:51.010 14:52:36 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:11:51.010 14:52:36 -- common/autotest_common.sh@626 -- # local arg=nvme 00:11:51.010 14:52:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.010 14:52:36 -- common/autotest_common.sh@630 -- # type -t nvme 00:11:51.010 14:52:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.010 14:52:36 -- common/autotest_common.sh@632 -- # type -P nvme 00:11:51.010 14:52:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.010 14:52:36 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:11:51.010 14:52:36 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:11:51.010 14:52:36 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:11:51.010 [2024-04-26 14:52:36.599509] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:11:51.010 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:51.010 could not add new controller: failed to write to nvme-fabrics device 00:11:51.010 14:52:36 -- common/autotest_common.sh@641 -- # es=1 00:11:51.010 14:52:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 
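The Input/output error above is the expected result, not a test failure: the subsystem was created with -a but allow-any-host was then disabled, so writing the connect request to /dev/nvme-fabrics is rejected until the host NQN is added to the subsystem's allow list. A condensed sketch of the flow, with the NQNs and target address exactly as they appear in this run:

# Subsystem starts in allow-any-host mode (-a), which is then switched off (-d)
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Connect is refused while the host NQN is not on the allow list...
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# ...and succeeds once the host is whitelisted
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02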
00:11:51.010 14:52:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:51.010 14:52:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:51.010 14:52:36 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:51.010 14:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.010 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:51.010 14:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.010 14:52:36 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.577 14:52:37 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.577 14:52:37 -- common/autotest_common.sh@1184 -- # local i=0 00:11:51.577 14:52:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.577 14:52:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:51.577 14:52:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:54.100 14:52:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:54.100 14:52:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:54.100 14:52:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.100 14:52:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:54.100 14:52:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.100 14:52:39 -- common/autotest_common.sh@1194 -- # return 0 00:11:54.100 14:52:39 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.100 14:52:39 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.100 14:52:39 -- common/autotest_common.sh@1205 -- # local i=0 00:11:54.100 14:52:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:54.100 14:52:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.100 14:52:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:54.100 14:52:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.100 14:52:39 -- common/autotest_common.sh@1217 -- # return 0 00:11:54.100 14:52:39 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:54.100 14:52:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.100 14:52:39 -- common/autotest_common.sh@10 -- # set +x 00:11:54.100 14:52:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.100 14:52:39 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.100 14:52:39 -- common/autotest_common.sh@638 -- # local es=0 00:11:54.100 14:52:39 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.100 14:52:39 -- common/autotest_common.sh@626 -- # local arg=nvme 00:11:54.100 14:52:39 -- common/autotest_common.sh@630 -- # case 
"$(type -t "$arg")" in 00:11:54.100 14:52:39 -- common/autotest_common.sh@630 -- # type -t nvme 00:11:54.100 14:52:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.100 14:52:39 -- common/autotest_common.sh@632 -- # type -P nvme 00:11:54.100 14:52:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.100 14:52:39 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:11:54.100 14:52:39 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:11:54.100 14:52:39 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.100 [2024-04-26 14:52:39.388418] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:11:54.100 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:54.100 could not add new controller: failed to write to nvme-fabrics device 00:11:54.100 14:52:39 -- common/autotest_common.sh@641 -- # es=1 00:11:54.100 14:52:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:54.100 14:52:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:54.100 14:52:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:54.100 14:52:39 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:54.100 14:52:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.100 14:52:39 -- common/autotest_common.sh@10 -- # set +x 00:11:54.100 14:52:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.100 14:52:39 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.358 14:52:39 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.358 14:52:39 -- common/autotest_common.sh@1184 -- # local i=0 00:11:54.358 14:52:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.358 14:52:39 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:54.358 14:52:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:56.255 14:52:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:56.255 14:52:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:56.255 14:52:41 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.255 14:52:41 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:56.255 14:52:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.255 14:52:41 -- common/autotest_common.sh@1194 -- # return 0 00:11:56.255 14:52:41 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.512 14:52:42 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.512 14:52:42 -- common/autotest_common.sh@1205 -- # local i=0 00:11:56.512 14:52:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:56.512 14:52:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.512 14:52:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:56.512 14:52:42 -- common/autotest_common.sh@1213 -- # grep -q 
-w SPDKISFASTANDAWESOME 00:11:56.512 14:52:42 -- common/autotest_common.sh@1217 -- # return 0 00:11:56.512 14:52:42 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.512 14:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.512 14:52:42 -- common/autotest_common.sh@10 -- # set +x 00:11:56.512 14:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.512 14:52:42 -- target/rpc.sh@81 -- # seq 1 5 00:11:56.512 14:52:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.512 14:52:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.512 14:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.512 14:52:42 -- common/autotest_common.sh@10 -- # set +x 00:11:56.512 14:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.512 14:52:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.512 14:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.512 14:52:42 -- common/autotest_common.sh@10 -- # set +x 00:11:56.512 [2024-04-26 14:52:42.098144] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.512 14:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.512 14:52:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.512 14:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.512 14:52:42 -- common/autotest_common.sh@10 -- # set +x 00:11:56.512 14:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.512 14:52:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.512 14:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.512 14:52:42 -- common/autotest_common.sh@10 -- # set +x 00:11:56.512 14:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.512 14:52:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.077 14:52:42 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.077 14:52:42 -- common/autotest_common.sh@1184 -- # local i=0 00:11:57.077 14:52:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.077 14:52:42 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:57.077 14:52:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:59.010 14:52:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:59.010 14:52:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:59.010 14:52:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.010 14:52:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:59.010 14:52:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.010 14:52:44 -- common/autotest_common.sh@1194 -- # return 0 00:11:59.010 14:52:44 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.268 14:52:44 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.268 14:52:44 -- common/autotest_common.sh@1205 -- # local i=0 00:11:59.268 14:52:44 -- common/autotest_common.sh@1206 
-- # lsblk -o NAME,SERIAL 00:11:59.268 14:52:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.268 14:52:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:59.268 14:52:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.268 14:52:44 -- common/autotest_common.sh@1217 -- # return 0 00:11:59.268 14:52:44 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.268 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.268 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:11:59.268 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.268 14:52:44 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.268 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.268 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:11:59.268 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.268 14:52:44 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.268 14:52:44 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.268 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.268 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:11:59.268 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.268 14:52:44 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.268 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.268 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:11:59.268 [2024-04-26 14:52:44.861202] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.268 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.268 14:52:44 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.268 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.268 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:11:59.268 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.268 14:52:44 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.268 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.268 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:11:59.268 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.268 14:52:44 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.833 14:52:45 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.833 14:52:45 -- common/autotest_common.sh@1184 -- # local i=0 00:11:59.833 14:52:45 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.833 14:52:45 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:59.833 14:52:45 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:01.730 14:52:47 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:01.730 14:52:47 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:01.730 14:52:47 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.730 14:52:47 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:01.730 
14:52:47 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.730 14:52:47 -- common/autotest_common.sh@1194 -- # return 0 00:12:01.730 14:52:47 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.988 14:52:47 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.988 14:52:47 -- common/autotest_common.sh@1205 -- # local i=0 00:12:01.988 14:52:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:01.988 14:52:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.988 14:52:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:01.988 14:52:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.988 14:52:47 -- common/autotest_common.sh@1217 -- # return 0 00:12:01.988 14:52:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.988 14:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.988 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:01.988 14:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.988 14:52:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.988 14:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.988 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:01.988 14:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.988 14:52:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:01.988 14:52:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.988 14:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.988 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:01.988 14:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.988 14:52:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.988 14:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.988 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:01.988 [2024-04-26 14:52:47.551321] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.988 14:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.988 14:52:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:01.988 14:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.988 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:01.988 14:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.988 14:52:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.988 14:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.988 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:01.988 14:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.988 14:52:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:02.553 14:52:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.553 14:52:48 -- common/autotest_common.sh@1184 -- # local i=0 00:12:02.553 14:52:48 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.553 14:52:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:02.553 14:52:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:05.076 14:52:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:05.076 14:52:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:05.076 14:52:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.076 14:52:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:05.077 14:52:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.077 14:52:50 -- common/autotest_common.sh@1194 -- # return 0 00:12:05.077 14:52:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.077 14:52:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.077 14:52:50 -- common/autotest_common.sh@1205 -- # local i=0 00:12:05.077 14:52:50 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:05.077 14:52:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.077 14:52:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:05.077 14:52:50 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.077 14:52:50 -- common/autotest_common.sh@1217 -- # return 0 00:12:05.077 14:52:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:05.077 14:52:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.077 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:12:05.077 14:52:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.077 14:52:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.077 14:52:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.077 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:12:05.077 14:52:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.077 14:52:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:05.077 14:52:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.077 14:52:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.077 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:12:05.077 14:52:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.077 14:52:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.077 14:52:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.077 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:12:05.077 [2024-04-26 14:52:50.372265] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.077 14:52:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.077 14:52:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:05.077 14:52:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.077 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:12:05.077 14:52:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.077 14:52:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.077 14:52:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.077 
14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:12:05.077 14:52:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.077 14:52:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.334 14:52:50 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:05.334 14:52:50 -- common/autotest_common.sh@1184 -- # local i=0 00:12:05.334 14:52:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.334 14:52:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:05.334 14:52:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:07.224 14:52:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:07.224 14:52:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:07.224 14:52:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.224 14:52:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:07.224 14:52:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.482 14:52:52 -- common/autotest_common.sh@1194 -- # return 0 00:12:07.482 14:52:52 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.482 14:52:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.482 14:52:53 -- common/autotest_common.sh@1205 -- # local i=0 00:12:07.482 14:52:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:07.482 14:52:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.482 14:52:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:07.482 14:52:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.482 14:52:53 -- common/autotest_common.sh@1217 -- # return 0 00:12:07.482 14:52:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:07.482 14:52:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.482 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:12:07.482 14:52:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.482 14:52:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.482 14:52:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.482 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:12:07.482 14:52:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.482 14:52:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:07.482 14:52:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.482 14:52:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.482 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:12:07.482 14:52:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.482 14:52:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.482 14:52:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.482 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:12:07.482 [2024-04-26 14:52:53.056656] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.482 14:52:53 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:12:07.482 14:52:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:07.482 14:52:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.482 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:12:07.482 14:52:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.482 14:52:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.482 14:52:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.482 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:12:07.482 14:52:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.482 14:52:53 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.045 14:52:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.045 14:52:53 -- common/autotest_common.sh@1184 -- # local i=0 00:12:08.045 14:52:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.045 14:52:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:08.045 14:52:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:09.939 14:52:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:09.939 14:52:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:09.939 14:52:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.939 14:52:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:09.939 14:52:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.939 14:52:55 -- common/autotest_common.sh@1194 -- # return 0 00:12:09.939 14:52:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.197 14:52:55 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.197 14:52:55 -- common/autotest_common.sh@1205 -- # local i=0 00:12:10.197 14:52:55 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:10.197 14:52:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.197 14:52:55 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:10.197 14:52:55 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.197 14:52:55 -- common/autotest_common.sh@1217 -- # return 0 00:12:10.197 14:52:55 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@99 -- # seq 1 5 00:12:10.197 14:52:55 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:10.197 14:52:55 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 
-- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 [2024-04-26 14:52:55.829426] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:10.197 14:52:55 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.197 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.197 [2024-04-26 14:52:55.877486] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.197 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.197 14:52:55 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:10.197 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.198 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.198 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.198 14:52:55 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.198 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.198 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.198 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.198 14:52:55 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:10.198 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.198 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.198 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.198 14:52:55 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.198 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.198 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.198 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.198 14:52:55 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:10.198 14:52:55 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.198 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.198 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.198 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.198 14:52:55 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.198 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.198 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.198 [2024-04-26 14:52:55.925641] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.198 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.198 14:52:55 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:10.198 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.198 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.198 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:55 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.456 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:55 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.456 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:55 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.456 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:55 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:10.456 14:52:55 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.456 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:55 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.456 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 [2024-04-26 14:52:55.973815] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:12:10.456 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:55 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:10.456 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:55 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.456 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:55 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.456 14:52:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:55 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:56 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.456 14:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:56 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:10.456 14:52:56 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.456 14:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:56 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.456 14:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 [2024-04-26 14:52:56.021981] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.456 14:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:56 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:10.456 14:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:56 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.456 14:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:56 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.456 14:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:56 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.456 14:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 
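Each pass of the rpc.sh@99 loop above drives one full subsystem lifecycle over JSON-RPC: create, attach a TCP listener, add a namespace, open the subsystem to any host, then remove the namespace and delete the subsystem. Condensed into plain shell, with rpc.py standing in for scripts/rpc.py against the running target:

    loops=5
    for i in $(seq 1 "$loops"); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Repeating the cycle five times exercises the create/delete paths; the nvmf_get_stats call that follows then samples the target's poll-group counters.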
14:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:56 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:10.456 14:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.456 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:12:10.456 14:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.456 14:52:56 -- target/rpc.sh@110 -- # stats='{ 00:12:10.456 "tick_rate": 2700000000, 00:12:10.456 "poll_groups": [ 00:12:10.456 { 00:12:10.456 "name": "nvmf_tgt_poll_group_0", 00:12:10.456 "admin_qpairs": 2, 00:12:10.456 "io_qpairs": 84, 00:12:10.456 "current_admin_qpairs": 0, 00:12:10.456 "current_io_qpairs": 0, 00:12:10.456 "pending_bdev_io": 0, 00:12:10.456 "completed_nvme_io": 159, 00:12:10.456 "transports": [ 00:12:10.456 { 00:12:10.456 "trtype": "TCP" 00:12:10.456 } 00:12:10.456 ] 00:12:10.456 }, 00:12:10.456 { 00:12:10.456 "name": "nvmf_tgt_poll_group_1", 00:12:10.456 "admin_qpairs": 2, 00:12:10.456 "io_qpairs": 84, 00:12:10.456 "current_admin_qpairs": 0, 00:12:10.456 "current_io_qpairs": 0, 00:12:10.456 "pending_bdev_io": 0, 00:12:10.456 "completed_nvme_io": 137, 00:12:10.456 "transports": [ 00:12:10.456 { 00:12:10.456 "trtype": "TCP" 00:12:10.456 } 00:12:10.456 ] 00:12:10.456 }, 00:12:10.456 { 00:12:10.456 "name": "nvmf_tgt_poll_group_2", 00:12:10.456 "admin_qpairs": 1, 00:12:10.456 "io_qpairs": 84, 00:12:10.456 "current_admin_qpairs": 0, 00:12:10.456 "current_io_qpairs": 0, 00:12:10.456 "pending_bdev_io": 0, 00:12:10.456 "completed_nvme_io": 181, 00:12:10.456 "transports": [ 00:12:10.456 { 00:12:10.456 "trtype": "TCP" 00:12:10.456 } 00:12:10.456 ] 00:12:10.456 }, 00:12:10.456 { 00:12:10.456 "name": "nvmf_tgt_poll_group_3", 00:12:10.457 "admin_qpairs": 2, 00:12:10.457 "io_qpairs": 84, 00:12:10.457 "current_admin_qpairs": 0, 00:12:10.457 "current_io_qpairs": 0, 00:12:10.457 "pending_bdev_io": 0, 00:12:10.457 "completed_nvme_io": 209, 00:12:10.457 "transports": [ 00:12:10.457 { 00:12:10.457 "trtype": "TCP" 00:12:10.457 } 00:12:10.457 ] 00:12:10.457 } 00:12:10.457 ] 00:12:10.457 }' 00:12:10.457 14:52:56 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:10.457 14:52:56 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:10.457 14:52:56 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:10.457 14:52:56 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:10.457 14:52:56 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:10.457 14:52:56 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:10.457 14:52:56 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:10.457 14:52:56 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:10.457 14:52:56 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:10.457 14:52:56 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:10.457 14:52:56 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:10.457 14:52:56 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:10.457 14:52:56 -- target/rpc.sh@123 -- # nvmftestfini 00:12:10.457 14:52:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:10.457 14:52:56 -- nvmf/common.sh@117 -- # sync 00:12:10.457 14:52:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.457 14:52:56 -- nvmf/common.sh@120 -- # set +e 00:12:10.457 14:52:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.457 14:52:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.457 rmmod nvme_tcp 00:12:10.457 rmmod nvme_fabrics 00:12:10.457 rmmod nvme_keyring 00:12:10.457 14:52:56 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:12:10.714 14:52:56 -- nvmf/common.sh@124 -- # set -e 00:12:10.714 14:52:56 -- nvmf/common.sh@125 -- # return 0 00:12:10.714 14:52:56 -- nvmf/common.sh@478 -- # '[' -n 3712261 ']' 00:12:10.714 14:52:56 -- nvmf/common.sh@479 -- # killprocess 3712261 00:12:10.715 14:52:56 -- common/autotest_common.sh@936 -- # '[' -z 3712261 ']' 00:12:10.715 14:52:56 -- common/autotest_common.sh@940 -- # kill -0 3712261 00:12:10.715 14:52:56 -- common/autotest_common.sh@941 -- # uname 00:12:10.715 14:52:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:10.715 14:52:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3712261 00:12:10.715 14:52:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:10.715 14:52:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:10.715 14:52:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3712261' 00:12:10.715 killing process with pid 3712261 00:12:10.715 14:52:56 -- common/autotest_common.sh@955 -- # kill 3712261 00:12:10.715 14:52:56 -- common/autotest_common.sh@960 -- # wait 3712261 00:12:10.974 14:52:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:10.974 14:52:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:10.974 14:52:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:10.974 14:52:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.974 14:52:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.974 14:52:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.974 14:52:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.974 14:52:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.883 14:52:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.883 00:12:12.883 real 0m24.655s 00:12:12.883 user 1m20.235s 00:12:12.883 sys 0m3.838s 00:12:12.883 14:52:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:12.883 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:12:12.883 ************************************ 00:12:12.883 END TEST nvmf_rpc 00:12:12.883 ************************************ 00:12:12.883 14:52:58 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:12.883 14:52:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:12.883 14:52:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:12.883 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:12:13.141 ************************************ 00:12:13.141 START TEST nvmf_invalid 00:12:13.141 ************************************ 00:12:13.141 14:52:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:13.141 * Looking for test storage... 
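Before tearing down, nvmf_rpc summed the nvmf_get_stats output with jsum, which is just a jq filter feeding an awk accumulator. A sketch of that helper as it is used above, assuming the stats JSON has been captured in $stats:

    jsum() {
        # Sum every number the jq filter selects (one value per poll group here).
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    stats=$(rpc.py nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in the run above
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 x 84 = 336 in the run above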
00:12:13.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.141 14:52:58 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.141 14:52:58 -- nvmf/common.sh@7 -- # uname -s 00:12:13.141 14:52:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.141 14:52:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.141 14:52:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.141 14:52:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.141 14:52:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.141 14:52:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.141 14:52:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.141 14:52:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.141 14:52:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.141 14:52:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.142 14:52:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:13.142 14:52:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:13.142 14:52:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.142 14:52:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.142 14:52:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.142 14:52:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.142 14:52:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.142 14:52:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.142 14:52:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.142 14:52:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.142 14:52:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.142 14:52:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.142 14:52:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.142 14:52:58 -- paths/export.sh@5 -- # export PATH 00:12:13.142 14:52:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.142 14:52:58 -- nvmf/common.sh@47 -- # : 0 00:12:13.142 14:52:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.142 14:52:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.142 14:52:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.142 14:52:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.142 14:52:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.142 14:52:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.142 14:52:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.142 14:52:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.142 14:52:58 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:13.142 14:52:58 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.142 14:52:58 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:13.142 14:52:58 -- target/invalid.sh@14 -- # target=foobar 00:12:13.142 14:52:58 -- target/invalid.sh@16 -- # RANDOM=0 00:12:13.142 14:52:58 -- target/invalid.sh@34 -- # nvmftestinit 00:12:13.142 14:52:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:13.142 14:52:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.142 14:52:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:13.142 14:52:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:13.142 14:52:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:13.142 14:52:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.142 14:52:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.142 14:52:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.142 14:52:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:13.142 14:52:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:13.142 14:52:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.142 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:12:15.065 14:53:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:15.065 14:53:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.065 14:53:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.065 14:53:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.065 14:53:00 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.065 14:53:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.065 14:53:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.065 14:53:00 -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.065 14:53:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.065 14:53:00 -- nvmf/common.sh@296 -- # e810=() 00:12:15.065 14:53:00 -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.065 14:53:00 -- nvmf/common.sh@297 -- # x722=() 00:12:15.065 14:53:00 -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.065 14:53:00 -- nvmf/common.sh@298 -- # mlx=() 00:12:15.065 14:53:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.065 14:53:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.065 14:53:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.065 14:53:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.065 14:53:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.065 14:53:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.065 14:53:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:15.065 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:15.065 14:53:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.065 14:53:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:15.065 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:15.065 14:53:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.065 14:53:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.065 
14:53:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.065 14:53:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:15.065 14:53:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.065 14:53:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:15.065 Found net devices under 0000:84:00.0: cvl_0_0 00:12:15.065 14:53:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.065 14:53:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.065 14:53:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.065 14:53:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:15.065 14:53:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.065 14:53:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:15.065 Found net devices under 0000:84:00.1: cvl_0_1 00:12:15.065 14:53:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.065 14:53:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:15.065 14:53:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:15.065 14:53:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:15.065 14:53:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:15.065 14:53:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.065 14:53:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.065 14:53:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.065 14:53:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.065 14:53:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.065 14:53:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.065 14:53:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.065 14:53:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.065 14:53:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.065 14:53:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.065 14:53:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.065 14:53:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.065 14:53:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.329 14:53:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.329 14:53:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.329 14:53:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.329 14:53:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.329 14:53:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.329 14:53:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.329 14:53:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:15.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:12:15.329 00:12:15.329 --- 10.0.0.2 ping statistics --- 00:12:15.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.329 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:12:15.329 14:53:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:15.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:12:15.329 00:12:15.329 --- 10.0.0.1 ping statistics --- 00:12:15.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.329 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:12:15.329 14:53:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.329 14:53:00 -- nvmf/common.sh@411 -- # return 0 00:12:15.329 14:53:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:15.329 14:53:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.329 14:53:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:15.329 14:53:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:15.329 14:53:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.329 14:53:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:15.329 14:53:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:15.329 14:53:00 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:15.329 14:53:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:15.329 14:53:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:15.329 14:53:00 -- common/autotest_common.sh@10 -- # set +x 00:12:15.329 14:53:00 -- nvmf/common.sh@470 -- # nvmfpid=3716674 00:12:15.329 14:53:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.329 14:53:00 -- nvmf/common.sh@471 -- # waitforlisten 3716674 00:12:15.329 14:53:00 -- common/autotest_common.sh@817 -- # '[' -z 3716674 ']' 00:12:15.329 14:53:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.329 14:53:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:15.329 14:53:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.329 14:53:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:15.329 14:53:00 -- common/autotest_common.sh@10 -- # set +x 00:12:15.329 [2024-04-26 14:53:00.968112] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:12:15.329 [2024-04-26 14:53:00.968191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.329 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.329 [2024-04-26 14:53:01.006841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:15.329 [2024-04-26 14:53:01.034175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.587 [2024-04-26 14:53:01.123737] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.587 [2024-04-26 14:53:01.123809] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.587 [2024-04-26 14:53:01.123845] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.587 [2024-04-26 14:53:01.123862] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
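The nvmf/common.sh setup traced above isolates the target-side NIC in its own network namespace so that initiator and target can talk NVMe/TCP over two real E810 ports on one host. The essential commands, using the cvl_0_0/cvl_0_1 names found by the PCI scan (a sketch of the nvmf_tcp_init path, not a verbatim copy):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is also why nvmf_tgt itself is launched through 'ip netns exec cvl_0_0_ns_spdk' above: every target-side command has to run inside that namespace.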
00:12:15.587 [2024-04-26 14:53:01.123877] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.587 [2024-04-26 14:53:01.123993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.587 [2024-04-26 14:53:01.124060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.587 [2024-04-26 14:53:01.124125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.587 [2024-04-26 14:53:01.124130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.587 14:53:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:15.587 14:53:01 -- common/autotest_common.sh@850 -- # return 0 00:12:15.587 14:53:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:15.587 14:53:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:15.587 14:53:01 -- common/autotest_common.sh@10 -- # set +x 00:12:15.587 14:53:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.587 14:53:01 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.587 14:53:01 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13137 00:12:15.844 [2024-04-26 14:53:01.512487] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:15.844 14:53:01 -- target/invalid.sh@40 -- # out='request: 00:12:15.844 { 00:12:15.844 "nqn": "nqn.2016-06.io.spdk:cnode13137", 00:12:15.844 "tgt_name": "foobar", 00:12:15.844 "method": "nvmf_create_subsystem", 00:12:15.844 "req_id": 1 00:12:15.844 } 00:12:15.844 Got JSON-RPC error response 00:12:15.844 response: 00:12:15.844 { 00:12:15.844 "code": -32603, 00:12:15.844 "message": "Unable to find target foobar" 00:12:15.844 }' 00:12:15.844 14:53:01 -- target/invalid.sh@41 -- # [[ request: 00:12:15.844 { 00:12:15.844 "nqn": "nqn.2016-06.io.spdk:cnode13137", 00:12:15.844 "tgt_name": "foobar", 00:12:15.844 "method": "nvmf_create_subsystem", 00:12:15.844 "req_id": 1 00:12:15.844 } 00:12:15.844 Got JSON-RPC error response 00:12:15.844 response: 00:12:15.844 { 00:12:15.844 "code": -32603, 00:12:15.844 "message": "Unable to find target foobar" 00:12:15.844 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:15.844 14:53:01 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:15.844 14:53:01 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11658 00:12:16.101 [2024-04-26 14:53:01.757304] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11658: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:16.101 14:53:01 -- target/invalid.sh@45 -- # out='request: 00:12:16.101 { 00:12:16.101 "nqn": "nqn.2016-06.io.spdk:cnode11658", 00:12:16.101 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:16.101 "method": "nvmf_create_subsystem", 00:12:16.101 "req_id": 1 00:12:16.101 } 00:12:16.101 Got JSON-RPC error response 00:12:16.101 response: 00:12:16.101 { 00:12:16.101 "code": -32602, 00:12:16.101 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:16.101 }' 00:12:16.101 14:53:01 -- target/invalid.sh@46 -- # [[ request: 00:12:16.101 { 00:12:16.101 "nqn": "nqn.2016-06.io.spdk:cnode11658", 00:12:16.101 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:16.101 "method": 
"nvmf_create_subsystem", 00:12:16.101 "req_id": 1 00:12:16.101 } 00:12:16.101 Got JSON-RPC error response 00:12:16.101 response: 00:12:16.101 { 00:12:16.101 "code": -32602, 00:12:16.101 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:16.101 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:16.101 14:53:01 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:16.101 14:53:01 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13550 00:12:16.359 [2024-04-26 14:53:01.994084] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13550: invalid model number 'SPDK_Controller' 00:12:16.359 14:53:02 -- target/invalid.sh@50 -- # out='request: 00:12:16.359 { 00:12:16.359 "nqn": "nqn.2016-06.io.spdk:cnode13550", 00:12:16.359 "model_number": "SPDK_Controller\u001f", 00:12:16.359 "method": "nvmf_create_subsystem", 00:12:16.359 "req_id": 1 00:12:16.359 } 00:12:16.359 Got JSON-RPC error response 00:12:16.359 response: 00:12:16.359 { 00:12:16.359 "code": -32602, 00:12:16.359 "message": "Invalid MN SPDK_Controller\u001f" 00:12:16.359 }' 00:12:16.359 14:53:02 -- target/invalid.sh@51 -- # [[ request: 00:12:16.359 { 00:12:16.359 "nqn": "nqn.2016-06.io.spdk:cnode13550", 00:12:16.359 "model_number": "SPDK_Controller\u001f", 00:12:16.359 "method": "nvmf_create_subsystem", 00:12:16.359 "req_id": 1 00:12:16.359 } 00:12:16.359 Got JSON-RPC error response 00:12:16.359 response: 00:12:16.359 { 00:12:16.359 "code": -32602, 00:12:16.359 "message": "Invalid MN SPDK_Controller\u001f" 00:12:16.359 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:16.359 14:53:02 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:16.359 14:53:02 -- target/invalid.sh@19 -- # local length=21 ll 00:12:16.359 14:53:02 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:16.359 14:53:02 -- target/invalid.sh@21 -- # local chars 00:12:16.359 14:53:02 -- target/invalid.sh@22 -- # local string 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 45 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=- 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 79 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=O 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 117 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=u 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 117 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=u 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 91 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+='[' 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 122 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=z 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 68 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=D 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 84 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=T 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 102 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=f 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 90 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=Z 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 36 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+='$' 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 85 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # string+=U 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.359 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # printf %x 102 00:12:16.359 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # string+=f 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # printf %x 94 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # string+='^' 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.360 14:53:02 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # printf %x 73 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # string+=I 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # printf %x 111 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # string+=o 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # printf %x 63 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # string+='?' 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # printf %x 70 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # string+=F 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # printf %x 73 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # string+=I 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # printf %x 102 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # string+=f 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # printf %x 46 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.360 14:53:02 -- target/invalid.sh@25 -- # string+=. 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.360 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.360 14:53:02 -- target/invalid.sh@28 -- # [[ - == \- ]] 00:12:16.360 14:53:02 -- target/invalid.sh@29 -- # string='\-Ouu[zDTfZ$Uf^Io?FIf.' 00:12:16.360 14:53:02 -- target/invalid.sh@31 -- # echo '\-Ouu[zDTfZ$Uf^Io?FIf.' 00:12:16.360 14:53:02 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\-Ouu[zDTfZ$Uf^Io?FIf.' nqn.2016-06.io.spdk:cnode30158 00:12:16.618 [2024-04-26 14:53:02.299097] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30158: invalid serial number '\-Ouu[zDTfZ$Uf^Io?FIf.' 00:12:16.618 14:53:02 -- target/invalid.sh@54 -- # out='request: 00:12:16.618 { 00:12:16.618 "nqn": "nqn.2016-06.io.spdk:cnode30158", 00:12:16.618 "serial_number": "\\-Ouu[zDTfZ$Uf^Io?FIf.", 00:12:16.618 "method": "nvmf_create_subsystem", 00:12:16.618 "req_id": 1 00:12:16.618 } 00:12:16.618 Got JSON-RPC error response 00:12:16.618 response: 00:12:16.618 { 00:12:16.618 "code": -32602, 00:12:16.618 "message": "Invalid SN \\-Ouu[zDTfZ$Uf^Io?FIf." 
00:12:16.618 }' 00:12:16.618 14:53:02 -- target/invalid.sh@55 -- # [[ request: 00:12:16.618 { 00:12:16.618 "nqn": "nqn.2016-06.io.spdk:cnode30158", 00:12:16.618 "serial_number": "\\-Ouu[zDTfZ$Uf^Io?FIf.", 00:12:16.618 "method": "nvmf_create_subsystem", 00:12:16.618 "req_id": 1 00:12:16.618 } 00:12:16.618 Got JSON-RPC error response 00:12:16.618 response: 00:12:16.618 { 00:12:16.618 "code": -32602, 00:12:16.618 "message": "Invalid SN \\-Ouu[zDTfZ$Uf^Io?FIf." 00:12:16.618 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:16.618 14:53:02 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:16.618 14:53:02 -- target/invalid.sh@19 -- # local length=41 ll 00:12:16.618 14:53:02 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:16.618 14:53:02 -- target/invalid.sh@21 -- # local chars 00:12:16.618 14:53:02 -- target/invalid.sh@22 -- # local string 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # printf %x 48 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # string+=0 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # printf %x 70 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # string+=F 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # printf %x 76 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # string+=L 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # printf %x 121 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # string+=y 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # printf %x 125 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # string+='}' 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # printf %x 35 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # string+='#' 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # printf %x 48 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:16.618 
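The invalid.sh cases all follow one pattern: issue an RPC that must fail, capture the JSON-RPC error text, and glob-match the message, as with the control byte rejected as "Invalid SN" above. The character-by-character trace around this point is gen_random_s assembling a 41-character string from codes 32 through 127 for the model-number variant of the same check. A condensed sketch of both, assuming rpc.py as shorthand for scripts/rpc.py and that the helper really is just printf plus a string append, as the trace suggests:

    # Negative test: an embedded 0x1f control byte must be rejected.
    out=$(rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11658 \
              -s $'SPDKISFASTANDAWESOME\037' 2>&1) || true
    [[ $out == *"Invalid SN"* ]]        # the \037 shows up as \u001f in the JSON above

    gen_random_s() {
        # Build a $1-character string; chars holds decimal codes 32..127.
        local length=$1 ll hex ch string=
        local chars=($(seq 32 127))
        for (( ll = 0; ll < length; ll++ )); do
            printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"
            printf -v ch "\\x${hex}"     # printf -v keeps spaces and DEL intact
            string+=$ch
        done
        printf '%s\n' "$string"
    }

    model=$(gen_random_s 41)            # e.g. the 0FLy}#0ceiFS;... string fed to -d below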
14:53:02 -- target/invalid.sh@25 -- # string+=0 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # printf %x 99 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # string+=c 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # printf %x 101 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:16.618 14:53:02 -- target/invalid.sh@25 -- # string+=e 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.618 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 105 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=i 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 70 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=F 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 83 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=S 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 59 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=';' 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 92 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+='\' 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 70 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=F 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 35 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+='#' 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 127 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 113 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x71' 
00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=q 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 47 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=/ 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 85 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=U 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 58 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=: 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 44 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=, 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 93 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=']' 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 46 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=. 
00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 55 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=7 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 66 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=B 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 100 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=d 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 99 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=c 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 105 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=i 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 41 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=')' 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 74 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=J 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 103 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=g 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 63 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+='?' 
00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 104 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=h 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 96 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+='`' 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 68 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=D 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 46 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=. 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 66 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=B 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 93 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=']' 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 121 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=y 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # printf %x 89 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:16.877 14:53:02 -- target/invalid.sh@25 -- # string+=Y 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.877 14:53:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.877 14:53:02 -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:12:16.877 14:53:02 -- target/invalid.sh@31 -- # echo '0FLy}#0ceiFS;\F#q/U:,].7Bdci)Jg?h`D.B]yY' 00:12:16.877 14:53:02 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '0FLy}#0ceiFS;\F#q/U:,].7Bdci)Jg?h`D.B]yY' nqn.2016-06.io.spdk:cnode4020 00:12:17.134 [2024-04-26 14:53:02.692367] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4020: invalid model number '0FLy}#0ceiFS;\F#q/U:,].7Bdci)Jg?h`D.B]yY' 00:12:17.134 14:53:02 -- target/invalid.sh@58 -- # out='request: 00:12:17.134 { 00:12:17.134 "nqn": "nqn.2016-06.io.spdk:cnode4020", 00:12:17.134 "model_number": "0FLy}#0ceiFS;\\F#\u007fq/U:,].7Bdci)Jg?h`D.B]yY", 00:12:17.134 "method": "nvmf_create_subsystem", 
00:12:17.134 "req_id": 1 00:12:17.134 } 00:12:17.134 Got JSON-RPC error response 00:12:17.134 response: 00:12:17.134 { 00:12:17.134 "code": -32602, 00:12:17.134 "message": "Invalid MN 0FLy}#0ceiFS;\\F#\u007fq/U:,].7Bdci)Jg?h`D.B]yY" 00:12:17.134 }' 00:12:17.134 14:53:02 -- target/invalid.sh@59 -- # [[ request: 00:12:17.134 { 00:12:17.134 "nqn": "nqn.2016-06.io.spdk:cnode4020", 00:12:17.134 "model_number": "0FLy}#0ceiFS;\\F#\u007fq/U:,].7Bdci)Jg?h`D.B]yY", 00:12:17.134 "method": "nvmf_create_subsystem", 00:12:17.134 "req_id": 1 00:12:17.134 } 00:12:17.134 Got JSON-RPC error response 00:12:17.134 response: 00:12:17.134 { 00:12:17.134 "code": -32602, 00:12:17.134 "message": "Invalid MN 0FLy}#0ceiFS;\\F#\u007fq/U:,].7Bdci)Jg?h`D.B]yY" 00:12:17.134 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:17.135 14:53:02 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:17.392 [2024-04-26 14:53:02.929240] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.392 14:53:02 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:17.649 14:53:03 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:17.649 14:53:03 -- target/invalid.sh@67 -- # echo '' 00:12:17.649 14:53:03 -- target/invalid.sh@67 -- # head -n 1 00:12:17.649 14:53:03 -- target/invalid.sh@67 -- # IP= 00:12:17.649 14:53:03 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:17.907 [2024-04-26 14:53:03.418856] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:17.907 14:53:03 -- target/invalid.sh@69 -- # out='request: 00:12:17.907 { 00:12:17.907 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:17.907 "listen_address": { 00:12:17.907 "trtype": "tcp", 00:12:17.907 "traddr": "", 00:12:17.907 "trsvcid": "4421" 00:12:17.907 }, 00:12:17.907 "method": "nvmf_subsystem_remove_listener", 00:12:17.907 "req_id": 1 00:12:17.907 } 00:12:17.907 Got JSON-RPC error response 00:12:17.907 response: 00:12:17.907 { 00:12:17.907 "code": -32602, 00:12:17.907 "message": "Invalid parameters" 00:12:17.907 }' 00:12:17.907 14:53:03 -- target/invalid.sh@70 -- # [[ request: 00:12:17.907 { 00:12:17.907 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:17.907 "listen_address": { 00:12:17.907 "trtype": "tcp", 00:12:17.907 "traddr": "", 00:12:17.907 "trsvcid": "4421" 00:12:17.907 }, 00:12:17.907 "method": "nvmf_subsystem_remove_listener", 00:12:17.907 "req_id": 1 00:12:17.907 } 00:12:17.907 Got JSON-RPC error response 00:12:17.907 response: 00:12:17.907 { 00:12:17.907 "code": -32602, 00:12:17.907 "message": "Invalid parameters" 00:12:17.907 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:17.907 14:53:03 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31056 -i 0 00:12:18.165 [2024-04-26 14:53:03.663617] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31056: invalid cntlid range [0-65519] 00:12:18.165 14:53:03 -- target/invalid.sh@73 -- # out='request: 00:12:18.165 { 00:12:18.165 "nqn": "nqn.2016-06.io.spdk:cnode31056", 00:12:18.165 "min_cntlid": 0, 00:12:18.165 "method": "nvmf_create_subsystem", 00:12:18.165 "req_id": 1 00:12:18.165 } 00:12:18.165 Got JSON-RPC error response 00:12:18.165 
response: 00:12:18.165 { 00:12:18.165 "code": -32602, 00:12:18.165 "message": "Invalid cntlid range [0-65519]" 00:12:18.165 }' 00:12:18.165 14:53:03 -- target/invalid.sh@74 -- # [[ request: 00:12:18.165 { 00:12:18.165 "nqn": "nqn.2016-06.io.spdk:cnode31056", 00:12:18.165 "min_cntlid": 0, 00:12:18.165 "method": "nvmf_create_subsystem", 00:12:18.165 "req_id": 1 00:12:18.165 } 00:12:18.165 Got JSON-RPC error response 00:12:18.165 response: 00:12:18.165 { 00:12:18.165 "code": -32602, 00:12:18.165 "message": "Invalid cntlid range [0-65519]" 00:12:18.165 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.165 14:53:03 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13636 -i 65520 00:12:18.165 [2024-04-26 14:53:03.900389] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13636: invalid cntlid range [65520-65519] 00:12:18.423 14:53:03 -- target/invalid.sh@75 -- # out='request: 00:12:18.423 { 00:12:18.423 "nqn": "nqn.2016-06.io.spdk:cnode13636", 00:12:18.423 "min_cntlid": 65520, 00:12:18.423 "method": "nvmf_create_subsystem", 00:12:18.423 "req_id": 1 00:12:18.423 } 00:12:18.423 Got JSON-RPC error response 00:12:18.423 response: 00:12:18.423 { 00:12:18.423 "code": -32602, 00:12:18.423 "message": "Invalid cntlid range [65520-65519]" 00:12:18.423 }' 00:12:18.423 14:53:03 -- target/invalid.sh@76 -- # [[ request: 00:12:18.423 { 00:12:18.423 "nqn": "nqn.2016-06.io.spdk:cnode13636", 00:12:18.423 "min_cntlid": 65520, 00:12:18.423 "method": "nvmf_create_subsystem", 00:12:18.423 "req_id": 1 00:12:18.423 } 00:12:18.423 Got JSON-RPC error response 00:12:18.423 response: 00:12:18.423 { 00:12:18.423 "code": -32602, 00:12:18.423 "message": "Invalid cntlid range [65520-65519]" 00:12:18.423 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.423 14:53:03 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode39 -I 0 00:12:18.423 [2024-04-26 14:53:04.141190] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode39: invalid cntlid range [1-0] 00:12:18.423 14:53:04 -- target/invalid.sh@77 -- # out='request: 00:12:18.423 { 00:12:18.423 "nqn": "nqn.2016-06.io.spdk:cnode39", 00:12:18.423 "max_cntlid": 0, 00:12:18.423 "method": "nvmf_create_subsystem", 00:12:18.423 "req_id": 1 00:12:18.423 } 00:12:18.423 Got JSON-RPC error response 00:12:18.423 response: 00:12:18.423 { 00:12:18.423 "code": -32602, 00:12:18.423 "message": "Invalid cntlid range [1-0]" 00:12:18.423 }' 00:12:18.423 14:53:04 -- target/invalid.sh@78 -- # [[ request: 00:12:18.423 { 00:12:18.423 "nqn": "nqn.2016-06.io.spdk:cnode39", 00:12:18.423 "max_cntlid": 0, 00:12:18.423 "method": "nvmf_create_subsystem", 00:12:18.423 "req_id": 1 00:12:18.423 } 00:12:18.423 Got JSON-RPC error response 00:12:18.423 response: 00:12:18.423 { 00:12:18.423 "code": -32602, 00:12:18.423 "message": "Invalid cntlid range [1-0]" 00:12:18.423 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.423 14:53:04 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26233 -I 65520 00:12:18.680 [2024-04-26 14:53:04.397999] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26233: invalid cntlid range [1-65520] 00:12:18.680 14:53:04 -- target/invalid.sh@79 -- # out='request: 00:12:18.680 { 00:12:18.680 
"nqn": "nqn.2016-06.io.spdk:cnode26233", 00:12:18.680 "max_cntlid": 65520, 00:12:18.680 "method": "nvmf_create_subsystem", 00:12:18.680 "req_id": 1 00:12:18.680 } 00:12:18.680 Got JSON-RPC error response 00:12:18.680 response: 00:12:18.680 { 00:12:18.680 "code": -32602, 00:12:18.680 "message": "Invalid cntlid range [1-65520]" 00:12:18.680 }' 00:12:18.680 14:53:04 -- target/invalid.sh@80 -- # [[ request: 00:12:18.680 { 00:12:18.680 "nqn": "nqn.2016-06.io.spdk:cnode26233", 00:12:18.680 "max_cntlid": 65520, 00:12:18.680 "method": "nvmf_create_subsystem", 00:12:18.680 "req_id": 1 00:12:18.680 } 00:12:18.680 Got JSON-RPC error response 00:12:18.680 response: 00:12:18.680 { 00:12:18.680 "code": -32602, 00:12:18.680 "message": "Invalid cntlid range [1-65520]" 00:12:18.680 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.680 14:53:04 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13855 -i 6 -I 5 00:12:18.938 [2024-04-26 14:53:04.646845] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13855: invalid cntlid range [6-5] 00:12:18.938 14:53:04 -- target/invalid.sh@83 -- # out='request: 00:12:18.938 { 00:12:18.938 "nqn": "nqn.2016-06.io.spdk:cnode13855", 00:12:18.938 "min_cntlid": 6, 00:12:18.938 "max_cntlid": 5, 00:12:18.938 "method": "nvmf_create_subsystem", 00:12:18.938 "req_id": 1 00:12:18.938 } 00:12:18.938 Got JSON-RPC error response 00:12:18.938 response: 00:12:18.938 { 00:12:18.938 "code": -32602, 00:12:18.938 "message": "Invalid cntlid range [6-5]" 00:12:18.938 }' 00:12:18.938 14:53:04 -- target/invalid.sh@84 -- # [[ request: 00:12:18.938 { 00:12:18.938 "nqn": "nqn.2016-06.io.spdk:cnode13855", 00:12:18.938 "min_cntlid": 6, 00:12:18.938 "max_cntlid": 5, 00:12:18.938 "method": "nvmf_create_subsystem", 00:12:18.938 "req_id": 1 00:12:18.938 } 00:12:18.938 Got JSON-RPC error response 00:12:18.938 response: 00:12:18.938 { 00:12:18.938 "code": -32602, 00:12:18.938 "message": "Invalid cntlid range [6-5]" 00:12:18.938 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.938 14:53:04 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:19.196 14:53:04 -- target/invalid.sh@87 -- # out='request: 00:12:19.196 { 00:12:19.196 "name": "foobar", 00:12:19.196 "method": "nvmf_delete_target", 00:12:19.196 "req_id": 1 00:12:19.196 } 00:12:19.196 Got JSON-RPC error response 00:12:19.196 response: 00:12:19.196 { 00:12:19.196 "code": -32602, 00:12:19.196 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:19.196 }' 00:12:19.196 14:53:04 -- target/invalid.sh@88 -- # [[ request: 00:12:19.196 { 00:12:19.196 "name": "foobar", 00:12:19.196 "method": "nvmf_delete_target", 00:12:19.196 "req_id": 1 00:12:19.196 } 00:12:19.196 Got JSON-RPC error response 00:12:19.196 response: 00:12:19.196 { 00:12:19.196 "code": -32602, 00:12:19.196 "message": "The specified target doesn't exist, cannot delete it." 
00:12:19.196 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:19.196 14:53:04 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:19.196 14:53:04 -- target/invalid.sh@91 -- # nvmftestfini 00:12:19.196 14:53:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:19.196 14:53:04 -- nvmf/common.sh@117 -- # sync 00:12:19.196 14:53:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.196 14:53:04 -- nvmf/common.sh@120 -- # set +e 00:12:19.196 14:53:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.196 14:53:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.196 rmmod nvme_tcp 00:12:19.196 rmmod nvme_fabrics 00:12:19.196 rmmod nvme_keyring 00:12:19.196 14:53:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.196 14:53:04 -- nvmf/common.sh@124 -- # set -e 00:12:19.196 14:53:04 -- nvmf/common.sh@125 -- # return 0 00:12:19.196 14:53:04 -- nvmf/common.sh@478 -- # '[' -n 3716674 ']' 00:12:19.196 14:53:04 -- nvmf/common.sh@479 -- # killprocess 3716674 00:12:19.196 14:53:04 -- common/autotest_common.sh@936 -- # '[' -z 3716674 ']' 00:12:19.197 14:53:04 -- common/autotest_common.sh@940 -- # kill -0 3716674 00:12:19.197 14:53:04 -- common/autotest_common.sh@941 -- # uname 00:12:19.197 14:53:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:19.197 14:53:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3716674 00:12:19.197 14:53:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:19.197 14:53:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:19.197 14:53:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3716674' 00:12:19.197 killing process with pid 3716674 00:12:19.197 14:53:04 -- common/autotest_common.sh@955 -- # kill 3716674 00:12:19.197 14:53:04 -- common/autotest_common.sh@960 -- # wait 3716674 00:12:19.456 14:53:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:19.456 14:53:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:19.456 14:53:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:19.456 14:53:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.456 14:53:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.456 14:53:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.456 14:53:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.456 14:53:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.989 14:53:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.989 00:12:21.989 real 0m8.487s 00:12:21.989 user 0m19.212s 00:12:21.989 sys 0m2.472s 00:12:21.989 14:53:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:21.989 14:53:07 -- common/autotest_common.sh@10 -- # set +x 00:12:21.989 ************************************ 00:12:21.989 END TEST nvmf_invalid 00:12:21.989 ************************************ 00:12:21.989 14:53:07 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:21.989 14:53:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:21.989 14:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.989 14:53:07 -- common/autotest_common.sh@10 -- # set +x 00:12:21.989 ************************************ 00:12:21.989 START TEST nvmf_abort 00:12:21.989 ************************************ 00:12:21.989 14:53:07 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:21.989 * Looking for test storage... 00:12:21.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.989 14:53:07 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.989 14:53:07 -- nvmf/common.sh@7 -- # uname -s 00:12:21.989 14:53:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.989 14:53:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.989 14:53:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.989 14:53:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.989 14:53:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.989 14:53:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.989 14:53:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.989 14:53:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.989 14:53:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.989 14:53:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.989 14:53:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:21.989 14:53:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:21.989 14:53:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.989 14:53:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.989 14:53:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.989 14:53:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.989 14:53:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.989 14:53:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.989 14:53:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.989 14:53:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.989 14:53:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.989 14:53:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.989 14:53:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.989 14:53:07 -- paths/export.sh@5 -- # export PATH 00:12:21.989 14:53:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.989 14:53:07 -- nvmf/common.sh@47 -- # : 0 00:12:21.989 14:53:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.989 14:53:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.989 14:53:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.989 14:53:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.989 14:53:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.989 14:53:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.989 14:53:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.989 14:53:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.989 14:53:07 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.989 14:53:07 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:21.989 14:53:07 -- target/abort.sh@14 -- # nvmftestinit 00:12:21.989 14:53:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:21.989 14:53:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.989 14:53:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:21.989 14:53:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:21.989 14:53:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:21.989 14:53:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.989 14:53:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.989 14:53:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.989 14:53:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:21.989 14:53:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:21.989 14:53:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.989 14:53:07 -- common/autotest_common.sh@10 -- # set +x 00:12:23.890 14:53:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:23.890 14:53:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.890 14:53:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.890 14:53:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.890 14:53:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.890 14:53:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.890 14:53:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.890 14:53:09 -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.890 14:53:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.891 14:53:09 -- nvmf/common.sh@296 -- 
# e810=() 00:12:23.891 14:53:09 -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.891 14:53:09 -- nvmf/common.sh@297 -- # x722=() 00:12:23.891 14:53:09 -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.891 14:53:09 -- nvmf/common.sh@298 -- # mlx=() 00:12:23.891 14:53:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.891 14:53:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.891 14:53:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.891 14:53:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:23.891 14:53:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.891 14:53:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.891 14:53:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:23.891 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:23.891 14:53:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.891 14:53:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:23.891 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:23.891 14:53:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.891 14:53:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.891 14:53:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.891 14:53:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:23.891 14:53:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.891 14:53:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:23.891 Found 
net devices under 0000:84:00.0: cvl_0_0 00:12:23.891 14:53:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.891 14:53:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.891 14:53:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.891 14:53:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:23.891 14:53:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.891 14:53:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:23.891 Found net devices under 0000:84:00.1: cvl_0_1 00:12:23.891 14:53:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.891 14:53:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:23.891 14:53:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:23.891 14:53:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:23.891 14:53:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.891 14:53:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.891 14:53:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.891 14:53:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:23.891 14:53:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.891 14:53:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.891 14:53:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:23.891 14:53:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.891 14:53:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.891 14:53:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:23.891 14:53:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:23.891 14:53:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.891 14:53:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.891 14:53:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.891 14:53:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.891 14:53:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:23.891 14:53:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.891 14:53:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.891 14:53:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.891 14:53:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:23.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:12:23.891 00:12:23.891 --- 10.0.0.2 ping statistics --- 00:12:23.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.891 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:23.891 14:53:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:23.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:12:23.891 00:12:23.891 --- 10.0.0.1 ping statistics --- 00:12:23.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.891 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:23.891 14:53:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.891 14:53:09 -- nvmf/common.sh@411 -- # return 0 00:12:23.891 14:53:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:23.891 14:53:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.891 14:53:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:23.891 14:53:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.891 14:53:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:23.891 14:53:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:23.891 14:53:09 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:23.891 14:53:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:23.891 14:53:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:23.891 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:23.891 14:53:09 -- nvmf/common.sh@470 -- # nvmfpid=3719312 00:12:23.891 14:53:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:23.891 14:53:09 -- nvmf/common.sh@471 -- # waitforlisten 3719312 00:12:23.891 14:53:09 -- common/autotest_common.sh@817 -- # '[' -z 3719312 ']' 00:12:23.891 14:53:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.891 14:53:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:23.891 14:53:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.891 14:53:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:23.891 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:23.891 [2024-04-26 14:53:09.473822] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:12:23.891 [2024-04-26 14:53:09.473889] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.891 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.891 [2024-04-26 14:53:09.513395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:23.891 [2024-04-26 14:53:09.545847] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:24.149 [2024-04-26 14:53:09.641683] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.149 [2024-04-26 14:53:09.641748] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.149 [2024-04-26 14:53:09.641765] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.149 [2024-04-26 14:53:09.641779] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
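The nvmf/common.sh sequence above builds the two-endpoint TCP fixture: one port of the NIC pair (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a ping in each direction proves reachability before any NVMe/TCP traffic flows. Replayed as standalone commands (root privileges assumed; interface names and addresses are those shown in the log):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator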
00:12:24.149 [2024-04-26 14:53:09.641792] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.149 [2024-04-26 14:53:09.641880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.149 [2024-04-26 14:53:09.641956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.149 [2024-04-26 14:53:09.641959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.149 14:53:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:24.149 14:53:09 -- common/autotest_common.sh@850 -- # return 0 00:12:24.149 14:53:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:24.149 14:53:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:24.149 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:24.149 14:53:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.149 14:53:09 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:24.149 14:53:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.150 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:24.150 [2024-04-26 14:53:09.789233] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.150 14:53:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.150 14:53:09 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:24.150 14:53:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.150 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:24.150 Malloc0 00:12:24.150 14:53:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.150 14:53:09 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:24.150 14:53:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.150 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:24.150 Delay0 00:12:24.150 14:53:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.150 14:53:09 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:24.150 14:53:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.150 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:24.150 14:53:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.150 14:53:09 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:24.150 14:53:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.150 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:24.150 14:53:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.150 14:53:09 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:24.150 14:53:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.150 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:24.150 [2024-04-26 14:53:09.863983] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.150 14:53:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.150 14:53:09 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.150 14:53:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.150 14:53:09 -- common/autotest_common.sh@10 -- # set +x 00:12:24.150 14:53:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:12:24.150 14:53:09 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:24.407 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.407 [2024-04-26 14:53:10.011158] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:26.934 Initializing NVMe Controllers 00:12:26.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:26.934 controller IO queue size 128 less than required 00:12:26.934 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:26.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:26.934 Initialization complete. Launching workers. 00:12:26.934 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34468 00:12:26.934 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34533, failed to submit 62 00:12:26.934 success 34472, unsuccess 61, failed 0 00:12:26.934 14:53:12 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:26.934 14:53:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.934 14:53:12 -- common/autotest_common.sh@10 -- # set +x 00:12:26.934 14:53:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.934 14:53:12 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:26.934 14:53:12 -- target/abort.sh@38 -- # nvmftestfini 00:12:26.934 14:53:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:26.934 14:53:12 -- nvmf/common.sh@117 -- # sync 00:12:26.934 14:53:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.934 14:53:12 -- nvmf/common.sh@120 -- # set +e 00:12:26.934 14:53:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.934 14:53:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.934 rmmod nvme_tcp 00:12:26.934 rmmod nvme_fabrics 00:12:26.934 rmmod nvme_keyring 00:12:26.934 14:53:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.934 14:53:12 -- nvmf/common.sh@124 -- # set -e 00:12:26.934 14:53:12 -- nvmf/common.sh@125 -- # return 0 00:12:26.934 14:53:12 -- nvmf/common.sh@478 -- # '[' -n 3719312 ']' 00:12:26.934 14:53:12 -- nvmf/common.sh@479 -- # killprocess 3719312 00:12:26.934 14:53:12 -- common/autotest_common.sh@936 -- # '[' -z 3719312 ']' 00:12:26.934 14:53:12 -- common/autotest_common.sh@940 -- # kill -0 3719312 00:12:26.934 14:53:12 -- common/autotest_common.sh@941 -- # uname 00:12:26.934 14:53:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.934 14:53:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3719312 00:12:26.934 14:53:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:26.934 14:53:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:26.934 14:53:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3719312' 00:12:26.934 killing process with pid 3719312 00:12:26.934 14:53:12 -- common/autotest_common.sh@955 -- # kill 3719312 00:12:26.934 14:53:12 -- common/autotest_common.sh@960 -- # wait 3719312 00:12:26.934 14:53:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:26.934 14:53:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:26.934 14:53:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:26.934 14:53:12 -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.934 14:53:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.934 14:53:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.934 14:53:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.934 14:53:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.468 14:53:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.468 00:12:29.468 real 0m7.338s 00:12:29.468 user 0m10.845s 00:12:29.468 sys 0m2.658s 00:12:29.468 14:53:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:29.468 14:53:14 -- common/autotest_common.sh@10 -- # set +x 00:12:29.468 ************************************ 00:12:29.468 END TEST nvmf_abort 00:12:29.468 ************************************ 00:12:29.468 14:53:14 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:29.468 14:53:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:29.468 14:53:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.468 14:53:14 -- common/autotest_common.sh@10 -- # set +x 00:12:29.468 ************************************ 00:12:29.468 START TEST nvmf_ns_hotplug_stress 00:12:29.468 ************************************ 00:12:29.468 14:53:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:29.468 * Looking for test storage... 00:12:29.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.468 14:53:14 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.468 14:53:14 -- nvmf/common.sh@7 -- # uname -s 00:12:29.468 14:53:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.468 14:53:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.468 14:53:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.468 14:53:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.468 14:53:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.468 14:53:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.468 14:53:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.468 14:53:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.468 14:53:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.468 14:53:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.468 14:53:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.468 14:53:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.468 14:53:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.468 14:53:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.468 14:53:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.468 14:53:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.468 14:53:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.468 14:53:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.468 14:53:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.468 14:53:14 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.468 14:53:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.468 14:53:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.468 14:53:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.468 14:53:14 -- paths/export.sh@5 -- # export PATH 00:12:29.468 14:53:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.468 14:53:14 -- nvmf/common.sh@47 -- # : 0 00:12:29.468 14:53:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.468 14:53:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.468 14:53:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.468 14:53:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.468 14:53:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.468 14:53:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.468 14:53:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.468 14:53:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.468 14:53:14 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.468 14:53:14 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:12:29.468 14:53:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:29.468 14:53:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.468 14:53:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:29.468 14:53:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:29.468 14:53:14 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:12:29.468 14:53:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.468 14:53:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.468 14:53:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.468 14:53:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:29.468 14:53:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:29.468 14:53:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.468 14:53:14 -- common/autotest_common.sh@10 -- # set +x 00:12:31.372 14:53:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:31.372 14:53:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.372 14:53:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.372 14:53:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.372 14:53:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.372 14:53:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.372 14:53:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.372 14:53:16 -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.372 14:53:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.372 14:53:16 -- nvmf/common.sh@296 -- # e810=() 00:12:31.372 14:53:16 -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.372 14:53:16 -- nvmf/common.sh@297 -- # x722=() 00:12:31.372 14:53:16 -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.372 14:53:16 -- nvmf/common.sh@298 -- # mlx=() 00:12:31.372 14:53:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.372 14:53:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.372 14:53:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.372 14:53:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.372 14:53:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.372 14:53:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.372 14:53:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.372 14:53:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.372 14:53:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.372 14:53:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.373 14:53:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.373 14:53:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.373 14:53:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.373 14:53:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.373 14:53:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.373 14:53:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.373 14:53:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:31.373 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:31.373 14:53:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.373 
14:53:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.373 14:53:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:31.373 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:31.373 14:53:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.373 14:53:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.373 14:53:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.373 14:53:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:31.373 14:53:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.373 14:53:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:31.373 Found net devices under 0000:84:00.0: cvl_0_0 00:12:31.373 14:53:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.373 14:53:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.373 14:53:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.373 14:53:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:31.373 14:53:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.373 14:53:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:31.373 Found net devices under 0000:84:00.1: cvl_0_1 00:12:31.373 14:53:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.373 14:53:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:31.373 14:53:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:31.373 14:53:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:31.373 14:53:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:31.373 14:53:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.373 14:53:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.373 14:53:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.373 14:53:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.373 14:53:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.373 14:53:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.373 14:53:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.373 14:53:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.373 14:53:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.373 14:53:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.373 14:53:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.373 14:53:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.373 14:53:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.373 14:53:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.373 14:53:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.373 14:53:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.373 
14:53:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.373 14:53:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.373 14:53:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.373 14:53:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:31.373 00:12:31.373 --- 10.0.0.2 ping statistics --- 00:12:31.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.373 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:31.373 14:53:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:12:31.373 00:12:31.373 --- 10.0.0.1 ping statistics --- 00:12:31.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.373 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:31.373 14:53:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.373 14:53:17 -- nvmf/common.sh@411 -- # return 0 00:12:31.373 14:53:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:31.373 14:53:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.373 14:53:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:31.373 14:53:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:31.373 14:53:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.373 14:53:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:31.373 14:53:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:31.373 14:53:17 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:12:31.373 14:53:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:31.373 14:53:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:31.373 14:53:17 -- common/autotest_common.sh@10 -- # set +x 00:12:31.373 14:53:17 -- nvmf/common.sh@470 -- # nvmfpid=3721675 00:12:31.373 14:53:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:31.373 14:53:17 -- nvmf/common.sh@471 -- # waitforlisten 3721675 00:12:31.373 14:53:17 -- common/autotest_common.sh@817 -- # '[' -z 3721675 ']' 00:12:31.373 14:53:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.373 14:53:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:31.373 14:53:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.373 14:53:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:31.373 14:53:17 -- common/autotest_common.sh@10 -- # set +x 00:12:31.373 [2024-04-26 14:53:17.082605] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
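At this point nvmfappstart has launched nvmf_tgt inside the target namespace and waitforlisten blocks until the JSON-RPC socket answers. A hedged sketch of that start-and-poll pattern (the polling loop below is illustrative; the real helper lives in autotest_common.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # stop waiting if the target process died
        sleep 0.5
    done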
00:12:31.373 [2024-04-26 14:53:17.082699] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.665 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.665 [2024-04-26 14:53:17.131258] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:31.665 [2024-04-26 14:53:17.162615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.665 [2024-04-26 14:53:17.255277] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.665 [2024-04-26 14:53:17.255346] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.665 [2024-04-26 14:53:17.255360] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.665 [2024-04-26 14:53:17.255372] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.665 [2024-04-26 14:53:17.255382] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.665 [2024-04-26 14:53:17.259055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.665 [2024-04-26 14:53:17.259141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.665 [2024-04-26 14:53:17.259143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.665 14:53:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:31.665 14:53:17 -- common/autotest_common.sh@850 -- # return 0 00:12:31.665 14:53:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:31.665 14:53:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:31.665 14:53:17 -- common/autotest_common.sh@10 -- # set +x 00:12:31.948 14:53:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.948 14:53:17 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:12:31.948 14:53:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:31.948 [2024-04-26 14:53:17.623121] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.948 14:53:17 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:32.206 14:53:17 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.464 [2024-04-26 14:53:18.105917] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.464 14:53:18 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:32.722 14:53:18 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:32.981 Malloc0 00:12:32.981 14:53:18 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 
1000000 -n 1000000 00:12:33.238 Delay0 00:12:33.238 14:53:18 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.495 14:53:19 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:33.753 NULL1 00:12:33.753 14:53:19 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:34.318 14:53:19 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3721982 00:12:34.318 14:53:19 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:34.318 14:53:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:34.318 14:53:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.318 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.250 Read completed with error (sct=0, sc=11) 00:12:35.250 14:53:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.764 14:53:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:12:35.764 14:53:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:36.022 true 00:12:36.022 14:53:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:36.022 14:53:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.586 14:53:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.101 14:53:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:12:37.101 14:53:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:37.358 true 00:12:37.358 14:53:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:37.358 14:53:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.615 14:53:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.872 14:53:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:12:37.872 14:53:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:37.872 true 00:12:38.129 14:53:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:38.129 14:53:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.062 14:53:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.062 14:53:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:12:39.062 14:53:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:39.319 true 00:12:39.319 14:53:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:39.319 14:53:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.576 14:53:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.834 14:53:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:12:39.834 14:53:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:40.091 true 00:12:40.091 14:53:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:40.091 14:53:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.022 14:53:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.278 14:53:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:12:41.278 14:53:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:41.535 true 00:12:41.535 14:53:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:41.535 14:53:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.792 14:53:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.792 14:53:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:12:41.792 14:53:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:42.048 true 00:12:42.048 14:53:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:42.048 14:53:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.977 
14:53:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.235 14:53:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:12:43.235 14:53:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:43.493 true 00:12:43.493 14:53:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:43.493 14:53:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.750 14:53:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.007 14:53:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:12:44.007 14:53:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:44.264 true 00:12:44.264 14:53:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:44.265 14:53:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.219 14:53:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.488 14:53:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:12:45.488 14:53:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:45.746 true 00:12:45.746 14:53:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:45.746 14:53:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.003 14:53:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.260 14:53:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:12:46.260 14:53:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:46.517 true 00:12:46.517 14:53:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:46.518 14:53:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.777 14:53:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.035 14:53:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:12:47.035 14:53:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:47.292 true 00:12:47.292 14:53:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:47.292 14:53:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.222 14:53:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.479 14:53:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:12:48.479 14:53:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:48.737 true 00:12:48.737 14:53:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:48.737 14:53:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.994 14:53:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.252 14:53:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:12:49.252 14:53:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:49.510 true 00:12:49.510 14:53:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:49.510 14:53:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.442 14:53:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.699 14:53:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:12:50.699 14:53:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:50.699 true 00:12:50.699 14:53:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:50.699 14:53:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.957 14:53:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.214 14:53:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:12:51.215 14:53:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:51.473 true 00:12:51.473 14:53:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:51.473 14:53:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.407 14:53:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.665 14:53:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:12:52.665 14:53:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:52.923 true 00:12:52.923 14:53:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:52.923 14:53:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:53.181 14:53:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.438 14:53:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:12:53.438 14:53:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:53.696 true 00:12:53.696 14:53:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:53.696 14:53:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.631 14:53:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.888 14:53:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:12:54.888 14:53:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:55.145 true 00:12:55.145 14:53:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:55.145 14:53:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.402 14:53:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.402 14:53:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:12:55.402 14:53:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:55.658 true 00:12:55.658 14:53:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:55.659 14:53:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.591 14:53:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.848 14:53:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:12:56.848 14:53:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:57.105 true 00:12:57.105 14:53:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:57.105 14:53:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.362 14:53:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.619 14:53:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:12:57.619 14:53:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:57.877 true 00:12:57.877 14:53:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:57.877 
14:53:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.833 14:53:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.091 14:53:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:12:59.091 14:53:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:59.348 true 00:12:59.348 14:53:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:12:59.348 14:53:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.605 14:53:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.862 14:53:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:12:59.862 14:53:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:00.120 true 00:13:00.120 14:53:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:13:00.120 14:53:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.378 14:53:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.635 14:53:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:13:00.635 14:53:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:00.893 true 00:13:00.893 14:53:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:13:00.893 14:53:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.825 14:53:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.083 14:53:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:13:02.083 14:53:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:02.340 true 00:13:02.597 14:53:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:13:02.597 14:53:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.161 14:53:48 -- 
target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.419 14:53:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:13:03.419 14:53:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:03.676 true 00:13:03.676 14:53:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:13:03.676 14:53:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.934 14:53:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.191 14:53:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:13:04.191 14:53:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:04.449 true 00:13:04.449 14:53:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982 00:13:04.449 14:53:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.381 Initializing NVMe Controllers 00:13:05.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:05.381 Controller IO queue size 128, less than required. 00:13:05.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:05.381 Controller IO queue size 128, less than required. 00:13:05.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:05.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:05.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:05.381 Initialization complete. Launching workers. 
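Note: the stress phase traced above reduces to one loop: spdk_nvme_perf drives random reads over TCP for 30 seconds while the script keeps hot-removing namespace 1, re-adding Delay0, and growing NULL1 one unit at a time, until kill -0 on the perf pid finally fails. A condensed sketch reconstructed from the trace (the authoritative logic is in test/nvmf/target/ns_hotplug_stress.sh); the per-device summary that perf prints on exit follows below:

    # background I/O load against the subsystem under test
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        # yank the namespace out from under the initiator, then put it back
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        # grow the null bdev by one unit each pass (1001, 1002, ...)
        null_size=$((null_size + 1))
        ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"
    done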
00:13:05.381 ========================================================
00:13:05.381 Latency(us)
00:13:05.381 Device Information : IOPS MiB/s Average min max
00:13:05.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 931.93 0.46 76908.33 2788.11 1012728.02
00:13:05.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11104.80 5.42 11526.46 2818.76 535643.75
00:13:05.381 ========================================================
00:13:05.381 Total : 12036.73 5.88 16588.59 2788.11 1012728.02
00:13:05.381
00:13:05.381 14:53:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:05.638 14:53:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029
00:13:05.638 14:53:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:13:05.896 true
00:13:05.896 14:53:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3721982
00:13:05.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3721982) - No such process
00:13:05.896 14:53:51 -- target/ns_hotplug_stress.sh@44 -- # wait 3721982
00:13:05.896 14:53:51 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:13:05.896 14:53:51 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:13:05.896 14:53:51 -- nvmf/common.sh@477 -- # nvmfcleanup
00:13:05.896 14:53:51 -- nvmf/common.sh@117 -- # sync
00:13:05.896 14:53:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:05.896 14:53:51 -- nvmf/common.sh@120 -- # set +e
00:13:05.896 14:53:51 -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:05.896 14:53:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:05.896 rmmod nvme_tcp
00:13:05.896 rmmod nvme_fabrics
00:13:05.896 rmmod nvme_keyring
00:13:05.896 14:53:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:05.896 14:53:51 -- nvmf/common.sh@124 -- # set -e
00:13:05.896 14:53:51 -- nvmf/common.sh@125 -- # return 0
00:13:05.896 14:53:51 -- nvmf/common.sh@478 -- # '[' -n 3721675 ']'
00:13:05.896 14:53:51 -- nvmf/common.sh@479 -- # killprocess 3721675
00:13:05.896 14:53:51 -- common/autotest_common.sh@936 -- # '[' -z 3721675 ']'
00:13:05.896 14:53:51 -- common/autotest_common.sh@940 -- # kill -0 3721675
00:13:05.896 14:53:51 -- common/autotest_common.sh@941 -- # uname
00:13:05.896 14:53:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:05.896 14:53:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3721675
00:13:05.896 14:53:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:13:05.896 14:53:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:13:05.896 14:53:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3721675'
00:13:05.896 killing process with pid 3721675
00:13:05.896 14:53:51 -- common/autotest_common.sh@955 -- # kill 3721675
00:13:05.896 14:53:51 -- common/autotest_common.sh@960 -- # wait 3721675
00:13:06.154 14:53:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:13:06.154 14:53:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:13:06.154 14:53:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:13:06.154 14:53:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:06.154 14:53:51 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:06.154 14:53:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
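Note: the latency summary printed above (just before the teardown trace) is internally consistent: the Total row combines the two namespace rows, with the average weighted by IOPS. A quick check of the arithmetic, numbers copied from the table:

    awk 'BEGIN {
        iops = 931.93 + 11104.80                                # -> 12036.73, matches Total IOPS
        avg  = (931.93 * 76908.33 + 11104.80 * 11526.46) / iops
        printf "%.2f %.2f\n", iops, avg                         # -> 12036.73 ~16588.6, matches Total Average
    }'

The NSID 1 namespace predictably dominates worst-case latency, since it is backed by Delay0, the delay bdev created on top of Malloc0 earlier in the test.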
00:13:06.154 14:53:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.154 14:53:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.688 14:53:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.688 00:13:08.688 real 0m39.174s 00:13:08.688 user 2m31.663s 00:13:08.688 sys 0m10.951s 00:13:08.688 14:53:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:08.688 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:13:08.688 ************************************ 00:13:08.688 END TEST nvmf_ns_hotplug_stress 00:13:08.688 ************************************ 00:13:08.688 14:53:53 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:08.688 14:53:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:08.688 14:53:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.688 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:13:08.688 ************************************ 00:13:08.688 START TEST nvmf_connect_stress 00:13:08.688 ************************************ 00:13:08.688 14:53:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:08.688 * Looking for test storage... 00:13:08.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.688 14:53:54 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.688 14:53:54 -- nvmf/common.sh@7 -- # uname -s 00:13:08.688 14:53:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.688 14:53:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.688 14:53:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.688 14:53:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.688 14:53:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.688 14:53:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.688 14:53:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.688 14:53:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.688 14:53:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.688 14:53:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.688 14:53:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:08.688 14:53:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:08.688 14:53:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.688 14:53:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.688 14:53:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.688 14:53:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.688 14:53:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.688 14:53:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.688 14:53:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.688 14:53:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.688 14:53:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.688 14:53:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.688 14:53:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.688 14:53:54 -- paths/export.sh@5 -- # export PATH 00:13:08.688 14:53:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.688 14:53:54 -- nvmf/common.sh@47 -- # : 0 00:13:08.688 14:53:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.688 14:53:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.688 14:53:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.688 14:53:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.688 14:53:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.688 14:53:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.688 14:53:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.688 14:53:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.688 14:53:54 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:08.688 14:53:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:08.688 14:53:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.688 14:53:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:08.688 14:53:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:08.688 14:53:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:08.688 14:53:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.688 14:53:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.688 14:53:54 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.688 14:53:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:08.688 14:53:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:08.688 14:53:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.688 14:53:54 -- common/autotest_common.sh@10 -- # set +x 00:13:10.591 14:53:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:10.591 14:53:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.591 14:53:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.591 14:53:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.591 14:53:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.591 14:53:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.591 14:53:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.591 14:53:56 -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.591 14:53:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.591 14:53:56 -- nvmf/common.sh@296 -- # e810=() 00:13:10.591 14:53:56 -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.591 14:53:56 -- nvmf/common.sh@297 -- # x722=() 00:13:10.591 14:53:56 -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.591 14:53:56 -- nvmf/common.sh@298 -- # mlx=() 00:13:10.591 14:53:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.591 14:53:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.591 14:53:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.591 14:53:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.591 14:53:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.591 14:53:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.591 14:53:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:10.591 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:10.591 14:53:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.591 14:53:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:10.591 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:10.591 
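Note: gather_supported_nvmf_pci_devs above walks a whitelist of Intel/Mellanox device IDs and matches both ports of this node's E810 NIC (vendor 0x8086, device 0x159b); the checks continuing below then map each PCI function to its kernel netdev through sysfs. The same lookup done by hand (a sketch; the device ID is taken from the log):

    # list E810 functions with full PCI addresses (domain:bus:dev.fn)
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        # each bound function exposes its netdev name under sysfs
        echo "Found net devices under $pci: $(ls /sys/bus/pci/devices/$pci/net/)"
    done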
14:53:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.591 14:53:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.591 14:53:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.591 14:53:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.591 14:53:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:10.591 14:53:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.591 14:53:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:10.591 Found net devices under 0000:84:00.0: cvl_0_0 00:13:10.591 14:53:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.591 14:53:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.591 14:53:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.592 14:53:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:10.592 14:53:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.592 14:53:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:10.592 Found net devices under 0000:84:00.1: cvl_0_1 00:13:10.592 14:53:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.592 14:53:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:10.592 14:53:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:10.592 14:53:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:10.592 14:53:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:10.592 14:53:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:10.592 14:53:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.592 14:53:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.592 14:53:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.592 14:53:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.592 14:53:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.592 14:53:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.592 14:53:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.592 14:53:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.592 14:53:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.592 14:53:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.592 14:53:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.592 14:53:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.592 14:53:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.592 14:53:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.592 14:53:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.592 14:53:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.592 14:53:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.592 14:53:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.592 14:53:56 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.592 14:53:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:13:10.592 00:13:10.592 --- 10.0.0.2 ping statistics --- 00:13:10.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.592 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:13:10.592 14:53:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:13:10.592 00:13:10.592 --- 10.0.0.1 ping statistics --- 00:13:10.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.592 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:10.592 14:53:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.592 14:53:56 -- nvmf/common.sh@411 -- # return 0 00:13:10.592 14:53:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:10.592 14:53:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.592 14:53:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:10.592 14:53:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:10.592 14:53:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.592 14:53:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:10.592 14:53:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:10.592 14:53:56 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:10.592 14:53:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:10.592 14:53:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:10.592 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:10.592 14:53:56 -- nvmf/common.sh@470 -- # nvmfpid=3727833 00:13:10.592 14:53:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:10.592 14:53:56 -- nvmf/common.sh@471 -- # waitforlisten 3727833 00:13:10.592 14:53:56 -- common/autotest_common.sh@817 -- # '[' -z 3727833 ']' 00:13:10.592 14:53:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.592 14:53:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:10.592 14:53:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.592 14:53:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:10.592 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:10.851 [2024-04-26 14:53:56.375448] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:13:10.851 [2024-04-26 14:53:56.375536] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.851 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.851 [2024-04-26 14:53:56.414534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
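Note: as in the first test, the target is started with core mask -m 0xE, i.e. bits 1-3 set and core 0 left free, which is why the EAL reports three available cores and the reactor notices below land on cores 1, 2 and 3. Decoding the mask by hand (illustrative):

    # 0xE == 0b1110: bit 0 (core 0) clear, bits 1-3 set
    mask=0xE
    for core in 0 1 2 3; do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done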
00:13:10.851 [2024-04-26 14:53:56.441908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:10.851 [2024-04-26 14:53:56.525643] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.851 [2024-04-26 14:53:56.525702] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.851 [2024-04-26 14:53:56.525728] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.851 [2024-04-26 14:53:56.525742] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.851 [2024-04-26 14:53:56.525754] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.851 [2024-04-26 14:53:56.525850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.851 [2024-04-26 14:53:56.525919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.851 [2024-04-26 14:53:56.525921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.109 14:53:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:11.109 14:53:56 -- common/autotest_common.sh@850 -- # return 0 00:13:11.109 14:53:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:11.109 14:53:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:11.109 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:11.109 14:53:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.109 14:53:56 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.109 14:53:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.109 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:11.109 [2024-04-26 14:53:56.678596] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.109 14:53:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.109 14:53:56 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:11.109 14:53:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.109 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:11.109 14:53:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.109 14:53:56 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.109 14:53:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.109 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:11.109 [2024-04-26 14:53:56.714236] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.109 14:53:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.109 14:53:56 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:11.109 14:53:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.109 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:11.109 NULL1 00:13:11.109 14:53:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.109 14:53:56 -- target/connect_stress.sh@21 -- # PERF_PID=3727859 00:13:11.109 14:53:56 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:11.109 14:53:56 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 
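Note: the connect_stress setup traced above goes through the rpc_cmd wrapper rather than invoking rpc.py by path, but it amounts to four plain RPCs against /var/tmp/spdk.sock. Spelled out with the values from the trace (comments are interpretation; consult rpc.py --help at this SPDK revision for the authoritative flag names):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the options staged in NVMF_TRANSPORT_OPTS
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                            # allow any host, set serial, cap namespaces at 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                # listen on the namespaced interface
    ./scripts/rpc.py bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512-byte blocks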
00:13:11.109 14:53:56 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:11.109 14:53:56 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:11.109 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.109 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.109 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.109 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.109 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.109 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.109 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.109 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.109 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.109 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.109 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.109 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.109 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.109 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.109 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:11.110 14:53:56 -- target/connect_stress.sh@28 -- # cat 00:13:11.110 14:53:56 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:11.110 14:53:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.110 14:53:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.110 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:13:11.368 14:53:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:13:11.368 14:53:57 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:11.368 14:53:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.368 14:53:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.368 14:53:57 -- common/autotest_common.sh@10 -- # set +x 00:13:11.933 14:53:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.933 14:53:57 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:11.933 14:53:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.933 14:53:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.933 14:53:57 -- common/autotest_common.sh@10 -- # set +x 00:13:12.191 14:53:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.191 14:53:57 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:12.191 14:53:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.191 14:53:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.191 14:53:57 -- common/autotest_common.sh@10 -- # set +x 00:13:12.449 14:53:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.449 14:53:58 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:12.449 14:53:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.449 14:53:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.449 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:13:12.706 14:53:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.706 14:53:58 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:12.706 14:53:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.706 14:53:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.706 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:13:13.278 14:53:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.278 14:53:58 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:13.278 14:53:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.278 14:53:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.278 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:13:13.580 14:53:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.580 14:53:59 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:13.580 14:53:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.580 14:53:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.580 14:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:13.844 14:53:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.844 14:53:59 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:13.844 14:53:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.844 14:53:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.844 14:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:14.101 14:53:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.101 14:53:59 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:14.101 14:53:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.101 14:53:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.101 14:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:14.358 14:53:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.358 14:53:59 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:14.358 14:53:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.358 14:53:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.358 14:53:59 -- common/autotest_common.sh@10 -- # set +x 00:13:14.615 14:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
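Note: the repeating kill -0 / rpc_cmd pairs above and below are the watchdog half of this test: the connect_stress binary hammers the subsystem with connect/disconnect cycles for 10 seconds while the script keeps replaying the batch of RPCs it staged in rpc.txt, and the loop ends once kill -0 on the stressor's pid fails. A sketch of that loop (rpc.txt is built earlier from twenty heredoc-appended entries whose contents are not shown in this trace; bash xtrace hides stdin redirections, which is why the log shows bare "rpc_cmd" lines):

    # stress connects in the background for 10 seconds
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    # keep the target's RPC plane busy until the stressor exits
    while kill -0 "$PERF_PID" 2>/dev/null; do
        ./scripts/rpc.py < rpc.txt    # replay the staged RPC batch (sketch)
    done
    wait "$PERF_PID"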
00:13:14.615 14:54:00 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:14.615 14:54:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.615 14:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.615 14:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:15.180 14:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.180 14:54:00 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:15.180 14:54:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.180 14:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.180 14:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:15.437 14:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.437 14:54:00 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:15.437 14:54:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.437 14:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.437 14:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:15.695 14:54:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.695 14:54:01 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:15.695 14:54:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.695 14:54:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.695 14:54:01 -- common/autotest_common.sh@10 -- # set +x 00:13:15.952 14:54:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.952 14:54:01 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:15.952 14:54:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.952 14:54:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.952 14:54:01 -- common/autotest_common.sh@10 -- # set +x 00:13:16.208 14:54:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.208 14:54:01 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:16.208 14:54:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.208 14:54:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.208 14:54:01 -- common/autotest_common.sh@10 -- # set +x 00:13:16.773 14:54:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.773 14:54:02 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:16.773 14:54:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.773 14:54:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.773 14:54:02 -- common/autotest_common.sh@10 -- # set +x 00:13:17.031 14:54:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.031 14:54:02 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:17.031 14:54:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.031 14:54:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.031 14:54:02 -- common/autotest_common.sh@10 -- # set +x 00:13:17.288 14:54:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.288 14:54:02 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:17.288 14:54:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.288 14:54:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.288 14:54:02 -- common/autotest_common.sh@10 -- # set +x 00:13:17.546 14:54:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.546 14:54:03 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:17.546 14:54:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.546 14:54:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.546 14:54:03 -- common/autotest_common.sh@10 -- # set +x 00:13:17.803 14:54:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.803 
14:54:03 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:17.803 14:54:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.803 14:54:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.803 14:54:03 -- common/autotest_common.sh@10 -- # set +x 00:13:18.367 14:54:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:18.367 14:54:03 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:18.367 14:54:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.367 14:54:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:18.367 14:54:03 -- common/autotest_common.sh@10 -- # set +x 00:13:18.624 14:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:18.624 14:54:04 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:18.624 14:54:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.624 14:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:18.624 14:54:04 -- common/autotest_common.sh@10 -- # set +x 00:13:18.882 14:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:18.882 14:54:04 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:18.882 14:54:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.882 14:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:18.882 14:54:04 -- common/autotest_common.sh@10 -- # set +x 00:13:19.138 14:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.138 14:54:04 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:19.138 14:54:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.138 14:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.138 14:54:04 -- common/autotest_common.sh@10 -- # set +x 00:13:19.394 14:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.395 14:54:05 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:19.395 14:54:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.395 14:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.395 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:13:19.957 14:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.957 14:54:05 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:19.957 14:54:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.957 14:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.957 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:13:20.214 14:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.214 14:54:05 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:20.214 14:54:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.214 14:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.214 14:54:05 -- common/autotest_common.sh@10 -- # set +x 00:13:20.471 14:54:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.471 14:54:06 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:20.471 14:54:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.471 14:54:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.471 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:13:20.729 14:54:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.729 14:54:06 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:20.729 14:54:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.729 14:54:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.729 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 14:54:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.294 14:54:06 -- 
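[editor's note: the block above is target/connect_stress.sh's supervision loop: while the stress clients hammer the subsystem, the script repeatedly asserts that the target process (pid 3727859) is still alive with kill -0, which delivers no signal and only tests whether the pid exists, and that its RPC socket still answers. A minimal sketch of the pattern, with TARGET_PID and the rpc.py probe as illustrative stand-ins for the harness's own wiring, not its exact code:

    # poll until the stress window closes or the target dies
    while kill -0 "$TARGET_PID" 2>/dev/null; do
        # any cheap RPC proves the reactor is still servicing requests
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null || break
        sleep 0.25   # pacing inferred from the ~300 ms gaps between the timestamps above
    done

kill -0 eventually failing with "No such process", as it does just below, is how the script detects that the target has exited.]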
target/connect_stress.sh@34 -- # kill -0 3727859 00:13:21.294 14:54:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.294 14:54:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.294 14:54:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:21.551 14:54:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.551 14:54:07 -- target/connect_stress.sh@34 -- # kill -0 3727859 00:13:21.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3727859) - No such process 00:13:21.552 14:54:07 -- target/connect_stress.sh@38 -- # wait 3727859 00:13:21.552 14:54:07 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:21.552 14:54:07 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:21.552 14:54:07 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:21.552 14:54:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:21.552 14:54:07 -- nvmf/common.sh@117 -- # sync 00:13:21.552 14:54:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.552 14:54:07 -- nvmf/common.sh@120 -- # set +e 00:13:21.552 14:54:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.552 14:54:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.552 rmmod nvme_tcp 00:13:21.552 rmmod nvme_fabrics 00:13:21.552 rmmod nvme_keyring 00:13:21.552 14:54:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.552 14:54:07 -- nvmf/common.sh@124 -- # set -e 00:13:21.552 14:54:07 -- nvmf/common.sh@125 -- # return 0 00:13:21.552 14:54:07 -- nvmf/common.sh@478 -- # '[' -n 3727833 ']' 00:13:21.552 14:54:07 -- nvmf/common.sh@479 -- # killprocess 3727833 00:13:21.552 14:54:07 -- common/autotest_common.sh@936 -- # '[' -z 3727833 ']' 00:13:21.552 14:54:07 -- common/autotest_common.sh@940 -- # kill -0 3727833 00:13:21.552 14:54:07 -- common/autotest_common.sh@941 -- # uname 00:13:21.552 14:54:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:21.552 14:54:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3727833 00:13:21.552 14:54:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:21.552 14:54:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:21.552 14:54:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3727833' 00:13:21.552 killing process with pid 3727833 00:13:21.552 14:54:07 -- common/autotest_common.sh@955 -- # kill 3727833 00:13:21.552 14:54:07 -- common/autotest_common.sh@960 -- # wait 3727833 00:13:21.811 14:54:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:21.811 14:54:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:21.811 14:54:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:21.811 14:54:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.811 14:54:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.811 14:54:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.811 14:54:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.811 14:54:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.718 14:54:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.718 00:13:23.718 real 0m15.393s 00:13:23.718 user 0m38.019s 00:13:23.718 sys 0m6.335s 00:13:23.718 14:54:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:23.718 14:54:09 -- 
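[editor's note: the nvmftestfini teardown just traced unloads the kernel initiator stack in dependency order. Because nvme-tcp can stay pinned while connections drain, the helper disables errexit and retries the unload up to 20 times before removing nvme-fabrics (here it succeeds on the first pass, hence the single rmmod trio). A rough sketch of that retry idiom, not the helper's exact body:

    set +e                          # unload may fail while the fabric drains
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    set -e

]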
common/autotest_common.sh@10 -- # set +x 00:13:23.718 ************************************ 00:13:23.718 END TEST nvmf_connect_stress 00:13:23.718 ************************************ 00:13:23.718 14:54:09 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:23.718 14:54:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:23.718 14:54:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.718 14:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:23.976 ************************************ 00:13:23.976 START TEST nvmf_fused_ordering 00:13:23.976 ************************************ 00:13:23.976 14:54:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:23.976 * Looking for test storage... 00:13:23.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.976 14:54:09 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.976 14:54:09 -- nvmf/common.sh@7 -- # uname -s 00:13:23.976 14:54:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.976 14:54:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.976 14:54:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.976 14:54:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.976 14:54:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.976 14:54:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.976 14:54:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.976 14:54:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.976 14:54:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.976 14:54:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.976 14:54:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:23.976 14:54:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:23.976 14:54:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.976 14:54:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.976 14:54:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.976 14:54:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.976 14:54:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.976 14:54:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.976 14:54:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.976 14:54:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.976 14:54:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.976 14:54:09 -- paths/export.sh@3 -- # 
[repeated PATH values elided: paths/export.sh@3 and @4 prepend the /opt/go/1.21.1/bin, /opt/golangci/1.54.2/bin, and /opt/protoc/21.7/bin toolchain directories to the already-long PATH built up by earlier sourcing passes (hence the duplicated entries), @5 exports it, and @6 echoes the final value]
00:13:23.976 14:54:09 -- nvmf/common.sh@47 -- # : 0 00:13:23.976 14:54:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.976 14:54:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.976 14:54:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.976 14:54:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.976 14:54:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.976 14:54:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.976 14:54:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.976 14:54:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.976 14:54:09 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:23.976 14:54:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:23.976 14:54:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.976 14:54:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:23.976 14:54:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:23.976 14:54:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:23.976 14:54:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.976 14:54:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.976 14:54:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.976 14:54:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:23.976 14:54:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:23.976 14:54:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:23.976 14:54:09 -- common/autotest_common.sh@10 -- # set +x 00:13:25.877 14:54:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:25.877 14:54:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:25.877 14:54:11 -- nvmf/common.sh@291 -- # local -a pci_devs
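[editor's note: gather_supported_nvmf_pci_devs, whose xtrace follows, fills the e810/x722/mlx arrays with the PCI addresses of NICs whose vendor:device IDs the harness knows (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox 0x15b3 parts). A simplified stand-alone version of that sysfs scan, not the harness's exact code:

    intel=0x8086
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        if [[ $vendor == "$intel" && ($device == 0x1592 || $device == 0x159b) ]]; then
            echo "Found ${dev##*/} ($vendor - $device)"   # e.g. Found 0000:84:00.0 (0x8086 - 0x159b)
        fi
    done

Each matching function is then mapped to its kernel net device via /sys/bus/pci/devices/$pci/net/*, which is where the cvl_0_0 and cvl_0_1 names reported below come from.]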
00:13:25.877 14:54:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:25.877 14:54:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:25.877 14:54:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:25.877 14:54:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:25.877 14:54:11 -- nvmf/common.sh@295 -- # net_devs=() 00:13:25.877 14:54:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:25.877 14:54:11 -- nvmf/common.sh@296 -- # e810=() 00:13:25.877 14:54:11 -- nvmf/common.sh@296 -- # local -ga e810 00:13:25.877 14:54:11 -- nvmf/common.sh@297 -- # x722=() 00:13:25.877 14:54:11 -- nvmf/common.sh@297 -- # local -ga x722 00:13:25.877 14:54:11 -- nvmf/common.sh@298 -- # mlx=() 00:13:25.877 14:54:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:25.877 14:54:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.877 14:54:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:25.877 14:54:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:25.877 14:54:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:25.877 14:54:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.877 14:54:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:25.877 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:25.877 14:54:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.877 14:54:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:25.877 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:25.877 14:54:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:25.877 14:54:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
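[editor's note: with both E810 ports found, nvmf_tcp_init (traced below) builds a physical loopback out of them: the target port is moved into a private network namespace so that traffic between the initiator (10.0.0.1, root namespace) and the target (10.0.0.2, cvl_0_0_ns_spdk) actually crosses the wire between the two NIC ports rather than short-circuiting through the host stack. Condensed from the xtrace that follows, using this run's interface names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP from the initiator

Both directions are then verified with single-packet pings before any NVMe traffic is attempted.]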
00:13:25.877 14:54:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.877 14:54:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.877 14:54:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:25.877 14:54:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.877 14:54:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:25.877 Found net devices under 0000:84:00.0: cvl_0_0 00:13:25.877 14:54:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.877 14:54:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.877 14:54:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.877 14:54:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:25.877 14:54:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.877 14:54:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:25.877 Found net devices under 0000:84:00.1: cvl_0_1 00:13:25.877 14:54:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.877 14:54:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:25.877 14:54:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:25.877 14:54:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:25.877 14:54:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:25.877 14:54:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.877 14:54:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.877 14:54:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.877 14:54:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:25.877 14:54:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.877 14:54:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.877 14:54:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:25.877 14:54:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.877 14:54:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.877 14:54:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:25.877 14:54:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:25.877 14:54:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.877 14:54:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.135 14:54:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.135 14:54:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.135 14:54:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.135 14:54:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.135 14:54:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.135 14:54:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.135 14:54:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:26.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:13:26.135 00:13:26.135 --- 10.0.0.2 ping statistics --- 00:13:26.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.135 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:26.135 14:54:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:13:26.135 00:13:26.135 --- 10.0.0.1 ping statistics --- 00:13:26.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.135 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:26.135 14:54:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.135 14:54:11 -- nvmf/common.sh@411 -- # return 0 00:13:26.135 14:54:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:26.135 14:54:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.135 14:54:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:26.135 14:54:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:26.135 14:54:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.135 14:54:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:26.135 14:54:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:26.135 14:54:11 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:26.135 14:54:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:26.135 14:54:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:26.135 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:13:26.135 14:54:11 -- nvmf/common.sh@470 -- # nvmfpid=3731719 00:13:26.135 14:54:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:26.135 14:54:11 -- nvmf/common.sh@471 -- # waitforlisten 3731719 00:13:26.135 14:54:11 -- common/autotest_common.sh@817 -- # '[' -z 3731719 ']' 00:13:26.135 14:54:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.135 14:54:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:26.135 14:54:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.136 14:54:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:26.136 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:13:26.136 [2024-04-26 14:54:11.780479] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:13:26.136 [2024-04-26 14:54:11.780565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.136 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.136 [2024-04-26 14:54:11.818239] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:26.136 [2024-04-26 14:54:11.850161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.394 [2024-04-26 14:54:11.938637] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
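[editor's note: nvmfappstart above launches nvmf_tgt inside the target namespace pinned to core 1 (-m 0x2), records its pid, and waitforlisten blocks until the app's RPC Unix socket answers. The probe and retry bound below are assumptions sketching the idea, not the helper's exact logic:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                          # 3731719 in this run
    for ((i = 0; i < 100; i++)); do
        # succeeds only once the app is listening on /var/tmp/spdk.sock
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Once the socket answers, the rpc_cmd calls traced below provision the target: a TCP transport with 8192-byte in-capsule data (-u 8192), an allow-any-host subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and the 1000 MB null bdev NULL1 attached as namespace 1. The fused_ordering initiator then connects and prints one fused_ordering(n) line per in-order completion of its fused command pairs.]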
00:13:26.395 [2024-04-26 14:54:11.938704] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.395 [2024-04-26 14:54:11.938732] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.395 [2024-04-26 14:54:11.938747] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.395 [2024-04-26 14:54:11.938759] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.395 [2024-04-26 14:54:11.938802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.395 14:54:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:26.395 14:54:12 -- common/autotest_common.sh@850 -- # return 0 00:13:26.395 14:54:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:26.395 14:54:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:26.395 14:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:26.395 14:54:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.395 14:54:12 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.395 14:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.395 14:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:26.395 [2024-04-26 14:54:12.088654] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.395 14:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.395 14:54:12 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.395 14:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.395 14:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:26.395 14:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.395 14:54:12 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.395 14:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.395 14:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:26.395 [2024-04-26 14:54:12.104891] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.395 14:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.395 14:54:12 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.395 14:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.395 14:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:26.395 NULL1 00:13:26.395 14:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.395 14:54:12 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:26.395 14:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.395 14:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:26.395 14:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.395 14:54:12 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:26.395 14:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.395 14:54:12 -- common/autotest_common.sh@10 -- # set +x 00:13:26.395 14:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.395 14:54:12 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.653 [2024-04-26 14:54:12.150767] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:13:26.653 [2024-04-26 14:54:12.150811] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731781 ] 00:13:26.653 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.653 [2024-04-26 14:54:12.184783] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:26.911 Attached to nqn.2016-06.io.spdk:cnode1 00:13:26.911 Namespace ID: 1 size: 1GB 00:13:26.911 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) follow in strict ascending order and are elided here; the completions arrive in timestamp bursts at 00:13:26.911-26.912, 00:13:27.530, 00:13:28.720, and 00:13:29.287]
00:13:29.287 fused_ordering(1023) 00:13:29.287 14:54:14 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:29.287 14:54:14 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:29.287 14:54:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:29.287 14:54:14 -- nvmf/common.sh@117 -- # sync 00:13:29.287 14:54:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.287 14:54:14 -- nvmf/common.sh@120 -- # set +e 00:13:29.287 14:54:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.287 14:54:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.287 rmmod nvme_tcp 00:13:29.287 rmmod nvme_fabrics 00:13:29.287 rmmod nvme_keyring 00:13:29.287 14:54:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.287 14:54:14 -- nvmf/common.sh@124 -- # set -e 00:13:29.287 14:54:14 -- nvmf/common.sh@125 -- # return 0 00:13:29.287 14:54:14 -- nvmf/common.sh@478 -- # '[' -n 3731719 ']' 00:13:29.287 14:54:14 -- nvmf/common.sh@479 -- #
killprocess 3731719 00:13:29.287 14:54:14 -- common/autotest_common.sh@936 -- # '[' -z 3731719 ']' 00:13:29.288 14:54:14 -- common/autotest_common.sh@940 -- # kill -0 3731719 00:13:29.288 14:54:14 -- common/autotest_common.sh@941 -- # uname 00:13:29.288 14:54:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:29.288 14:54:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3731719 00:13:29.288 14:54:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:29.288 14:54:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:29.288 14:54:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3731719' 00:13:29.288 killing process with pid 3731719 00:13:29.288 14:54:15 -- common/autotest_common.sh@955 -- # kill 3731719 00:13:29.288 14:54:15 -- common/autotest_common.sh@960 -- # wait 3731719 00:13:29.546 14:54:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:29.546 14:54:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:29.546 14:54:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:29.546 14:54:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.546 14:54:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.546 14:54:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.546 14:54:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.546 14:54:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.080 14:54:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:32.080 00:13:32.080 real 0m7.752s 00:13:32.080 user 0m5.201s 00:13:32.080 sys 0m3.604s 00:13:32.080 14:54:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:32.080 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:13:32.080 ************************************ 00:13:32.080 END TEST nvmf_fused_ordering 00:13:32.080 ************************************ 00:13:32.080 14:54:17 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:32.080 14:54:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:32.080 14:54:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:32.080 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:13:32.080 ************************************ 00:13:32.080 START TEST nvmf_delete_subsystem 00:13:32.080 ************************************ 00:13:32.080 14:54:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:32.080 * Looking for test storage... 
00:13:32.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.080 14:54:17 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.080 14:54:17 -- nvmf/common.sh@7 -- # uname -s 00:13:32.080 14:54:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.080 14:54:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.080 14:54:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.080 14:54:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.080 14:54:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.080 14:54:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.080 14:54:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.080 14:54:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.080 14:54:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.080 14:54:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.080 14:54:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:32.080 14:54:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:32.080 14:54:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.080 14:54:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.080 14:54:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.080 14:54:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.080 14:54:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.080 14:54:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.080 14:54:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.080 14:54:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.080 14:54:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.080 14:54:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.080 14:54:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.080 14:54:17 -- paths/export.sh@5 -- # export PATH 00:13:32.080 14:54:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.080 14:54:17 -- nvmf/common.sh@47 -- # : 0 00:13:32.080 14:54:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.080 14:54:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.080 14:54:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.080 14:54:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.080 14:54:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.080 14:54:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.080 14:54:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.080 14:54:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.080 14:54:17 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:32.080 14:54:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:32.080 14:54:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.080 14:54:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:32.080 14:54:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:32.080 14:54:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:32.080 14:54:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.080 14:54:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.080 14:54:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.080 14:54:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:32.080 14:54:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:32.080 14:54:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.080 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:13:33.980 14:54:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:33.980 14:54:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.980 14:54:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.980 14:54:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.980 14:54:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.980 14:54:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.980 14:54:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.980 14:54:19 -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.980 14:54:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.980 14:54:19 -- nvmf/common.sh@296 -- # e810=() 00:13:33.980 14:54:19 -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.980 14:54:19 -- nvmf/common.sh@297 -- # x722=() 
00:13:33.980 14:54:19 -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.980 14:54:19 -- nvmf/common.sh@298 -- # mlx=() 00:13:33.980 14:54:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.980 14:54:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.980 14:54:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.980 14:54:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:33.981 14:54:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.981 14:54:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.981 14:54:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:33.981 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:33.981 14:54:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.981 14:54:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:33.981 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:33.981 14:54:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.981 14:54:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.981 14:54:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.981 14:54:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:33.981 14:54:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.981 14:54:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:33.981 Found net devices under 0000:84:00.0: cvl_0_0 00:13:33.981 14:54:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
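The trace above is nvmf/common.sh matching known NICs by PCI vendor/device ID (Intel E810, 0x8086:0x159b, in this run) and then resolving each matching function to its kernel net device through sysfs. A minimal sketch of that resolution step, using the two functions found above; the loop shape is illustrative, not the literal nvmf/common.sh code:

    # Sketch: resolve a PCI function to the net devices the kernel bound to it.
    for pci in 0000:84:00.0 0000:84:00.1; do          # the two 0x8086:0x159b (E810) functions above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [ -e "${pci_net_devs[0]}" ] || continue       # no driver bound, nothing to report
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done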
00:13:33.981 14:54:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.981 14:54:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.981 14:54:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:33.981 14:54:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.981 14:54:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:33.981 Found net devices under 0000:84:00.1: cvl_0_1 00:13:33.981 14:54:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.981 14:54:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:33.981 14:54:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:33.981 14:54:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:33.981 14:54:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.981 14:54:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.981 14:54:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.981 14:54:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.981 14:54:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.981 14:54:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.981 14:54:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.981 14:54:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.981 14:54:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.981 14:54:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.981 14:54:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.981 14:54:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.981 14:54:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.981 14:54:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.981 14:54:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.981 14:54:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.981 14:54:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.981 14:54:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.981 14:54:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.981 14:54:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:13:33.981 00:13:33.981 --- 10.0.0.2 ping statistics --- 00:13:33.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.981 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:33.981 14:54:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:13:33.981 00:13:33.981 --- 10.0.0.1 ping statistics --- 00:13:33.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.981 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:13:33.981 14:54:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.981 14:54:19 -- nvmf/common.sh@411 -- # return 0 00:13:33.981 14:54:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:33.981 14:54:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.981 14:54:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:33.981 14:54:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.981 14:54:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:33.981 14:54:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:33.981 14:54:19 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:33.981 14:54:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:33.981 14:54:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:33.981 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:33.981 14:54:19 -- nvmf/common.sh@470 -- # nvmfpid=3734047 00:13:33.981 14:54:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:33.981 14:54:19 -- nvmf/common.sh@471 -- # waitforlisten 3734047 00:13:33.981 14:54:19 -- common/autotest_common.sh@817 -- # '[' -z 3734047 ']' 00:13:33.981 14:54:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.981 14:54:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:33.981 14:54:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.981 14:54:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:33.981 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:33.981 [2024-04-26 14:54:19.682468] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:13:33.981 [2024-04-26 14:54:19.682562] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.981 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.240 [2024-04-26 14:54:19.722688] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:34.240 [2024-04-26 14:54:19.750263] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.240 [2024-04-26 14:54:19.833182] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.240 [2024-04-26 14:54:19.833244] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.240 [2024-04-26 14:54:19.833273] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.240 [2024-04-26 14:54:19.833285] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
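nvmfappstart, traced above, launches the target inside the cvl_0_0_ns_spdk namespace set up earlier and blocks until the RPC socket exists. A reduced sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock path named in the "Waiting for process..." message:

    # Sketch: start nvmf_tgt inside the test namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do               # what waitforlisten does, minus retries and timeouts
        kill -0 "$nvmfpid" 2>/dev/null || exit 1      # give up if the target died during startup
        sleep 0.1
    done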
00:13:34.240 [2024-04-26 14:54:19.833295] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.240 [2024-04-26 14:54:19.833423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.240 [2024-04-26 14:54:19.833428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.240 14:54:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:34.240 14:54:19 -- common/autotest_common.sh@850 -- # return 0 00:13:34.240 14:54:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:34.240 14:54:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:34.240 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:34.240 14:54:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.240 14:54:19 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:34.240 14:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.240 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:34.240 [2024-04-26 14:54:19.963183] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.240 14:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.240 14:54:19 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:34.240 14:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.240 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:34.240 14:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.240 14:54:19 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.240 14:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.240 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:34.240 [2024-04-26 14:54:19.979391] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.499 14:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.499 14:54:19 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:34.499 14:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.499 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:34.499 NULL1 00:13:34.499 14:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.499 14:54:19 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:34.499 14:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.499 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:34.499 Delay0 00:13:34.499 14:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.499 14:54:19 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.499 14:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.499 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:34.499 14:54:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.499 14:54:20 -- target/delete_subsystem.sh@28 -- # perf_pid=3734157 00:13:34.499 14:54:20 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:34.499 14:54:20 -- target/delete_subsystem.sh@30 
-- # sleep 2 00:13:34.499 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.499 [2024-04-26 14:54:20.054130] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:36.400 14:54:22 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.400 14:54:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.400 14:54:22 -- common/autotest_common.sh@10 -- # set +x 00:13:36.658 Read completed with error (sct=0, sc=8) 00:13:36.658 starting I/O failed: -6 [... several hundred further Read/Write completions with error (sct=0, sc=8) and repeated 'starting I/O failed: -6' entries, 00:13:36.658-00:13:37.593, condensed ...] 00:13:36.658 [2024-04-26 14:54:22.146190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eabac0 is same with the state(5) to be set 00:13:36.658 [2024-04-26 14:54:22.147189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e93ea0 is same with the state(5) to be set 00:13:36.659 [2024-04-26 14:54:22.147679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbf04000c00 is same with the state(5) to be set 00:13:37.592 [2024-04-26 14:54:23.115585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e93220 is same with the state(5) to be set 00:13:37.592 [2024-04-26 14:54:23.147878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbf0400c510 is same with the state(5) to be set 00:13:37.592 [2024-04-26 14:54:23.148066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbf0400bf90 is same with the state(5) to be set 00:13:37.593 [2024-04-26 14:54:23.149711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eabc50 is same with the state(5) to be set 00:13:37.593 [2024-04-26 14:54:23.151051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8da50 is same with the state(5) to be set 00:13:37.593 [2024-04-26 14:54:23.151912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e93220 (9): Bad file descriptor 00:13:37.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:37.593 14:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.593 14:54:23 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:37.593 14:54:23 -- target/delete_subsystem.sh@35 -- # kill -0 3734157 00:13:37.593 14:54:23 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:37.593 Initializing NVMe Controllers 00:13:37.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.593 Controller IO queue size 128, less than required. 00:13:37.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:37.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:37.593 Initialization complete. Launching workers.
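The error storm above is the behavior under test: nvmf_delete_subsystem is issued while spdk_nvme_perf still holds queue depth 128 against the delayed namespace, so in-flight commands complete with errors and new submissions fail with -6 (ENXIO). The sequence, reduced to its moving parts, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

    # Sketch: the deliberate race between a loaded perf run and subsystem deletion.
    ./spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &     # same invocation as traced above
    perf_pid=$!
    sleep 2                                           # let the 128-deep queues fill against Delay0
    ./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # tear down mid-I/O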
00:13:37.593 ======================================================== 00:13:37.593 Latency(us) 00:13:37.593 Device Information : IOPS MiB/s Average min max 00:13:37.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.19 0.08 916788.34 1029.53 1045948.17 00:13:37.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.75 0.08 935202.42 448.61 1046149.74 00:13:37.593 ======================================================== 00:13:37.593 Total : 316.95 0.15 925779.25 448.61 1046149.74 00:13:37.593 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@35 -- # kill -0 3734157 00:13:38.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3734157) - No such process 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@45 -- # NOT wait 3734157 00:13:38.158 14:54:23 -- common/autotest_common.sh@638 -- # local es=0 00:13:38.158 14:54:23 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 3734157 00:13:38.158 14:54:23 -- common/autotest_common.sh@626 -- # local arg=wait 00:13:38.158 14:54:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.158 14:54:23 -- common/autotest_common.sh@630 -- # type -t wait 00:13:38.158 14:54:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.158 14:54:23 -- common/autotest_common.sh@641 -- # wait 3734157 00:13:38.158 14:54:23 -- common/autotest_common.sh@641 -- # es=1 00:13:38.158 14:54:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:38.158 14:54:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:38.158 14:54:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:38.158 14:54:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.158 14:54:23 -- common/autotest_common.sh@10 -- # set +x 00:13:38.158 14:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.158 14:54:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.158 14:54:23 -- common/autotest_common.sh@10 -- # set +x 00:13:38.158 [2024-04-26 14:54:23.675048] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.158 14:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.158 14:54:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.158 14:54:23 -- common/autotest_common.sh@10 -- # set +x 00:13:38.158 14:54:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@54 -- # perf_pid=3734561 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@57 -- # kill -0 3734561 00:13:38.158 14:54:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:38.158 EAL: No free 2048 kB hugepages 
reported on node 1 00:13:38.158 [2024-04-26 14:54:23.736089] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:38.723 14:54:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:38.723 14:54:24 -- target/delete_subsystem.sh@57 -- # kill -0 3734561 00:13:38.723 14:54:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:38.981 14:54:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:38.981 14:54:24 -- target/delete_subsystem.sh@57 -- # kill -0 3734561 00:13:38.981 14:54:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:39.545 14:54:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:39.545 14:54:25 -- target/delete_subsystem.sh@57 -- # kill -0 3734561 00:13:39.545 14:54:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:40.110 14:54:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:40.110 14:54:25 -- target/delete_subsystem.sh@57 -- # kill -0 3734561 00:13:40.110 14:54:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:40.673 14:54:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:40.673 14:54:26 -- target/delete_subsystem.sh@57 -- # kill -0 3734561 00:13:40.673 14:54:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.237 14:54:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:41.237 14:54:26 -- target/delete_subsystem.sh@57 -- # kill -0 3734561 00:13:41.237 14:54:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.237 Initializing NVMe Controllers 00:13:41.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:41.237 Controller IO queue size 128, less than required. 00:13:41.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:41.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:41.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:41.237 Initialization complete. Launching workers. 
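The repeated '(( delay++ > 20 ))' / 'kill -0' / 'sleep 0.5' traces above are the script polling for the perf process to exit; kill -0 delivers no signal, it only asks the kernel whether the pid still exists. Extracted as a standalone sketch:

    # Sketch: poll for perf to exit; kill -0 sends no signal, it only
    # reports whether the pid still exists.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1                  # give up after ~10s of 0.5s naps
        sleep 0.5
    done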
00:13:41.237 ======================================================== 00:13:41.237 Latency(us) 00:13:41.237 Device Information : IOPS MiB/s Average min max 00:13:41.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004289.09 1000208.66 1012632.66 00:13:41.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004041.04 1000162.18 1043222.55 00:13:41.237 ======================================================== 00:13:41.237 Total : 256.00 0.12 1004165.07 1000162.18 1043222.55 00:13:41.237 00:13:41.494 14:54:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:41.494 14:54:27 -- target/delete_subsystem.sh@57 -- # kill -0 3734561 00:13:41.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3734561) - No such process 00:13:41.495 14:54:27 -- target/delete_subsystem.sh@67 -- # wait 3734561 00:13:41.495 14:54:27 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:41.495 14:54:27 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:41.495 14:54:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:41.495 14:54:27 -- nvmf/common.sh@117 -- # sync 00:13:41.495 14:54:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:41.495 14:54:27 -- nvmf/common.sh@120 -- # set +e 00:13:41.495 14:54:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:41.495 14:54:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:41.495 rmmod nvme_tcp 00:13:41.495 rmmod nvme_fabrics 00:13:41.753 rmmod nvme_keyring 00:13:41.753 14:54:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:41.753 14:54:27 -- nvmf/common.sh@124 -- # set -e 00:13:41.753 14:54:27 -- nvmf/common.sh@125 -- # return 0 00:13:41.753 14:54:27 -- nvmf/common.sh@478 -- # '[' -n 3734047 ']' 00:13:41.753 14:54:27 -- nvmf/common.sh@479 -- # killprocess 3734047 00:13:41.753 14:54:27 -- common/autotest_common.sh@936 -- # '[' -z 3734047 ']' 00:13:41.753 14:54:27 -- common/autotest_common.sh@940 -- # kill -0 3734047 00:13:41.753 14:54:27 -- common/autotest_common.sh@941 -- # uname 00:13:41.753 14:54:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:41.753 14:54:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3734047 00:13:41.753 14:54:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:41.753 14:54:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:41.753 14:54:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3734047' 00:13:41.753 killing process with pid 3734047 00:13:41.753 14:54:27 -- common/autotest_common.sh@955 -- # kill 3734047 00:13:41.753 14:54:27 -- common/autotest_common.sh@960 -- # wait 3734047 00:13:42.012 14:54:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:42.012 14:54:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:42.012 14:54:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:42.012 14:54:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.012 14:54:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.012 14:54:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.012 14:54:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.012 14:54:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.954 14:54:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:43.954 00:13:43.954 real 0m12.138s 00:13:43.954 user 0m27.377s 00:13:43.954 sys 0m2.980s 
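The killprocess trace above is deliberately defensive: it confirms the pid is still alive, reads the command name back from ps so a recycled pid (or sudo) is never signaled, then kills and reaps the target. A sketch reconstructed from the traced checks; the real helper in common/autotest_common.sh differs in detail:

    # Sketch of the killprocess helper, following the checks traced above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                     # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 1        # already gone?
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                    # reap it so the harness sees the exit status
    }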
00:13:43.954 14:54:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:43.954 14:54:29 -- common/autotest_common.sh@10 -- # set +x 00:13:43.954 ************************************ 00:13:43.954 END TEST nvmf_delete_subsystem 00:13:43.954 ************************************ 00:13:43.954 14:54:29 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:43.954 14:54:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:43.954 14:54:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.954 14:54:29 -- common/autotest_common.sh@10 -- # set +x 00:13:43.954 ************************************ 00:13:43.954 START TEST nvmf_ns_masking 00:13:43.954 ************************************ 00:13:43.954 14:54:29 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:44.212 * Looking for test storage... 00:13:44.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.212 14:54:29 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.212 14:54:29 -- nvmf/common.sh@7 -- # uname -s 00:13:44.212 14:54:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.212 14:54:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.212 14:54:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.212 14:54:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.212 14:54:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.212 14:54:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.212 14:54:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.212 14:54:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.212 14:54:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.212 14:54:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.212 14:54:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:44.212 14:54:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:44.212 14:54:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.212 14:54:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.212 14:54:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.212 14:54:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.212 14:54:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.212 14:54:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.212 14:54:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.212 14:54:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.212 14:54:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.212 14:54:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.213 14:54:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.213 14:54:29 -- paths/export.sh@5 -- # export PATH 00:13:44.213 14:54:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.213 14:54:29 -- nvmf/common.sh@47 -- # : 0 00:13:44.213 14:54:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.213 14:54:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.213 14:54:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.213 14:54:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.213 14:54:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.213 14:54:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.213 14:54:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.213 14:54:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.213 14:54:29 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.213 14:54:29 -- target/ns_masking.sh@11 -- # loops=5 00:13:44.213 14:54:29 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:44.213 14:54:29 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:44.213 14:54:29 -- target/ns_masking.sh@15 -- # uuidgen 00:13:44.213 14:54:29 -- target/ns_masking.sh@15 -- # HOSTID=68041c9a-bc82-42c5-9d7a-8aea03f73452 00:13:44.213 14:54:29 -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:44.213 14:54:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:44.213 14:54:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.213 14:54:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:44.213 14:54:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:44.213 14:54:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:44.213 14:54:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.213 14:54:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.213 14:54:29 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:13:44.213 14:54:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:44.213 14:54:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:44.213 14:54:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.213 14:54:29 -- common/autotest_common.sh@10 -- # set +x 00:13:46.116 14:54:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:46.116 14:54:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:46.116 14:54:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:46.116 14:54:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:46.116 14:54:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:46.116 14:54:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:46.116 14:54:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:46.116 14:54:31 -- nvmf/common.sh@295 -- # net_devs=() 00:13:46.116 14:54:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:46.116 14:54:31 -- nvmf/common.sh@296 -- # e810=() 00:13:46.116 14:54:31 -- nvmf/common.sh@296 -- # local -ga e810 00:13:46.116 14:54:31 -- nvmf/common.sh@297 -- # x722=() 00:13:46.116 14:54:31 -- nvmf/common.sh@297 -- # local -ga x722 00:13:46.116 14:54:31 -- nvmf/common.sh@298 -- # mlx=() 00:13:46.116 14:54:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:46.116 14:54:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.116 14:54:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:46.116 14:54:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:46.116 14:54:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:46.116 14:54:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.116 14:54:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:46.116 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:46.116 14:54:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.116 14:54:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:46.116 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:46.116 14:54:31 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:46.116 14:54:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.116 14:54:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.116 14:54:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:46.116 14:54:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.116 14:54:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:46.116 Found net devices under 0000:84:00.0: cvl_0_0 00:13:46.116 14:54:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.116 14:54:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.116 14:54:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.116 14:54:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:46.116 14:54:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.116 14:54:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:46.116 Found net devices under 0000:84:00.1: cvl_0_1 00:13:46.116 14:54:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.116 14:54:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:46.116 14:54:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:46.116 14:54:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:46.116 14:54:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:46.116 14:54:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.116 14:54:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.116 14:54:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.116 14:54:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:46.116 14:54:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.116 14:54:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.116 14:54:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:46.116 14:54:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.116 14:54:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.116 14:54:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:46.116 14:54:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:46.116 14:54:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.116 14:54:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.116 14:54:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.116 14:54:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.116 14:54:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:46.116 14:54:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.116 14:54:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.375 14:54:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.376 14:54:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:46.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:13:46.376 00:13:46.376 --- 10.0.0.2 ping statistics --- 00:13:46.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.376 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:13:46.376 14:54:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:13:46.376 00:13:46.376 --- 10.0.0.1 ping statistics --- 00:13:46.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.376 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:13:46.376 14:54:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.376 14:54:31 -- nvmf/common.sh@411 -- # return 0 00:13:46.376 14:54:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:46.376 14:54:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.376 14:54:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:46.376 14:54:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:46.376 14:54:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.376 14:54:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:46.376 14:54:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:46.376 14:54:31 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:13:46.376 14:54:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:46.376 14:54:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:46.376 14:54:31 -- common/autotest_common.sh@10 -- # set +x 00:13:46.376 14:54:31 -- nvmf/common.sh@470 -- # nvmfpid=3736928 00:13:46.376 14:54:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.376 14:54:31 -- nvmf/common.sh@471 -- # waitforlisten 3736928 00:13:46.376 14:54:31 -- common/autotest_common.sh@817 -- # '[' -z 3736928 ']' 00:13:46.376 14:54:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.376 14:54:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:46.376 14:54:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.376 14:54:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:46.376 14:54:31 -- common/autotest_common.sh@10 -- # set +x 00:13:46.376 [2024-04-26 14:54:31.936601] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:13:46.376 [2024-04-26 14:54:31.936672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.376 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.376 [2024-04-26 14:54:31.974845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
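At this point the target side is fully isolated: nvmf_tgt runs inside the cvl_0_0_ns_spdk network namespace on 10.0.0.2, while the initiator stays in the default namespace and reaches it over cvl_0_1 (10.0.0.1), which is what the ping pair above verifies. The masking checks that follow hinge on three RPCs plus one initiator-side probe; condensed into a sketch (NQNs, device node, and rpc.py path as used in this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# A namespace added with --no-auto-visible starts out hidden from every host.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# Grant a single host NQN visibility, then revoke it again.
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# On the initiator, a hidden namespace reports an all-zero NGUID:
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

The script's NOT wrapper asserts that this probe fails while the namespace is hidden, and the JSON-RPC -32602 "Invalid parameters" response further down is likewise the expected outcome: namespace 2 is added without --no-auto-visible, so it appears to have no per-host visibility list for nvmf_ns_remove_host to edit.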
00:13:46.376 [2024-04-26 14:54:32.007056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.376 [2024-04-26 14:54:32.101389] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.376 [2024-04-26 14:54:32.101468] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.376 [2024-04-26 14:54:32.101485] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.376 [2024-04-26 14:54:32.101499] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.376 [2024-04-26 14:54:32.101511] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.376 [2024-04-26 14:54:32.101605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.376 [2024-04-26 14:54:32.101669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.376 [2024-04-26 14:54:32.101741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.376 [2024-04-26 14:54:32.101744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.634 14:54:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:46.634 14:54:32 -- common/autotest_common.sh@850 -- # return 0 00:13:46.634 14:54:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:46.634 14:54:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:46.634 14:54:32 -- common/autotest_common.sh@10 -- # set +x 00:13:46.634 14:54:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.634 14:54:32 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:46.891 [2024-04-26 14:54:32.481712] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.891 14:54:32 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:13:46.891 14:54:32 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:13:46.891 14:54:32 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:47.150 Malloc1 00:13:47.150 14:54:32 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:47.408 Malloc2 00:13:47.408 14:54:33 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:47.665 14:54:33 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:47.921 14:54:33 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.179 [2024-04-26 14:54:33.774381] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.179 14:54:33 -- target/ns_masking.sh@61 -- # connect 00:13:48.179 14:54:33 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 68041c9a-bc82-42c5-9d7a-8aea03f73452 -a 10.0.0.2 -s 4420 -i 4 00:13:48.436 14:54:33 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:13:48.436 14:54:33 -- 
common/autotest_common.sh@1184 -- # local i=0 00:13:48.436 14:54:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.436 14:54:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:48.436 14:54:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:50.335 14:54:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:50.335 14:54:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:50.335 14:54:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.335 14:54:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:50.335 14:54:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.335 14:54:35 -- common/autotest_common.sh@1194 -- # return 0 00:13:50.335 14:54:35 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:50.335 14:54:35 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:50.335 14:54:36 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:50.335 14:54:36 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:50.335 14:54:36 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:13:50.335 14:54:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:50.335 14:54:36 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:50.335 [ 0]:0x1 00:13:50.335 14:54:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:50.335 14:54:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:50.335 14:54:36 -- target/ns_masking.sh@40 -- # nguid=dcd6ad0e041b4c1fa5489bfe1ab36f1c 00:13:50.335 14:54:36 -- target/ns_masking.sh@41 -- # [[ dcd6ad0e041b4c1fa5489bfe1ab36f1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.335 14:54:36 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:50.593 14:54:36 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:13:50.593 14:54:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:50.593 14:54:36 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:50.593 [ 0]:0x1 00:13:50.593 14:54:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:50.593 14:54:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:50.851 14:54:36 -- target/ns_masking.sh@40 -- # nguid=dcd6ad0e041b4c1fa5489bfe1ab36f1c 00:13:50.851 14:54:36 -- target/ns_masking.sh@41 -- # [[ dcd6ad0e041b4c1fa5489bfe1ab36f1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.851 14:54:36 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:13:50.851 14:54:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:50.851 14:54:36 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:50.851 [ 1]:0x2 00:13:50.851 14:54:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:50.851 14:54:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:50.851 14:54:36 -- target/ns_masking.sh@40 -- # nguid=ff60324ef4e54aae9f152a5e845fe5bf 00:13:50.851 14:54:36 -- target/ns_masking.sh@41 -- # [[ ff60324ef4e54aae9f152a5e845fe5bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.851 14:54:36 -- target/ns_masking.sh@69 -- # disconnect 00:13:50.851 14:54:36 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.851 14:54:36 -- 
target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.417 14:54:36 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:51.417 14:54:37 -- target/ns_masking.sh@77 -- # connect 1 00:13:51.417 14:54:37 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 68041c9a-bc82-42c5-9d7a-8aea03f73452 -a 10.0.0.2 -s 4420 -i 4 00:13:51.675 14:54:37 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:51.675 14:54:37 -- common/autotest_common.sh@1184 -- # local i=0 00:13:51.675 14:54:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.675 14:54:37 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:13:51.675 14:54:37 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:13:51.675 14:54:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:53.571 14:54:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:53.571 14:54:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:53.571 14:54:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.571 14:54:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:53.571 14:54:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.571 14:54:39 -- common/autotest_common.sh@1194 -- # return 0 00:13:53.571 14:54:39 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:53.571 14:54:39 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:53.571 14:54:39 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:53.571 14:54:39 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:53.571 14:54:39 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:53.571 14:54:39 -- common/autotest_common.sh@638 -- # local es=0 00:13:53.571 14:54:39 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:53.571 14:54:39 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:53.571 14:54:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:53.571 14:54:39 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:53.571 14:54:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:53.571 14:54:39 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:53.571 14:54:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:53.571 14:54:39 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:53.571 14:54:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:53.571 14:54:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:53.571 14:54:39 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:53.571 14:54:39 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.571 14:54:39 -- common/autotest_common.sh@641 -- # es=1 00:13:53.571 14:54:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:53.571 14:54:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:53.571 14:54:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:53.571 14:54:39 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:13:53.571 14:54:39 -- 
target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:53.571 14:54:39 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:53.571 [ 0]:0x2 00:13:53.571 14:54:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:53.571 14:54:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:53.828 14:54:39 -- target/ns_masking.sh@40 -- # nguid=ff60324ef4e54aae9f152a5e845fe5bf 00:13:53.828 14:54:39 -- target/ns_masking.sh@41 -- # [[ ff60324ef4e54aae9f152a5e845fe5bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.828 14:54:39 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:54.085 14:54:39 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:54.085 14:54:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.085 14:54:39 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:54.085 [ 0]:0x1 00:13:54.085 14:54:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.085 14:54:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.085 14:54:39 -- target/ns_masking.sh@40 -- # nguid=dcd6ad0e041b4c1fa5489bfe1ab36f1c 00:13:54.085 14:54:39 -- target/ns_masking.sh@41 -- # [[ dcd6ad0e041b4c1fa5489bfe1ab36f1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.085 14:54:39 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:54.085 14:54:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.085 14:54:39 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:54.085 [ 1]:0x2 00:13:54.085 14:54:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.085 14:54:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.085 14:54:39 -- target/ns_masking.sh@40 -- # nguid=ff60324ef4e54aae9f152a5e845fe5bf 00:13:54.085 14:54:39 -- target/ns_masking.sh@41 -- # [[ ff60324ef4e54aae9f152a5e845fe5bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.085 14:54:39 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:54.342 14:54:39 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:54.342 14:54:39 -- common/autotest_common.sh@638 -- # local es=0 00:13:54.342 14:54:39 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:54.342 14:54:39 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:54.342 14:54:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:54.342 14:54:39 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:54.342 14:54:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:54.342 14:54:39 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:54.342 14:54:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.342 14:54:39 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:54.342 14:54:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.342 14:54:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.342 14:54:39 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:54.342 14:54:39 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.342 14:54:39 -- common/autotest_common.sh@641 -- # es=1 00:13:54.342 14:54:39 -- 
common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:54.342 14:54:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:54.342 14:54:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:54.342 14:54:39 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:54.342 14:54:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.342 14:54:39 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:54.342 [ 0]:0x2 00:13:54.342 14:54:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.342 14:54:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.599 14:54:40 -- target/ns_masking.sh@40 -- # nguid=ff60324ef4e54aae9f152a5e845fe5bf 00:13:54.599 14:54:40 -- target/ns_masking.sh@41 -- # [[ ff60324ef4e54aae9f152a5e845fe5bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.599 14:54:40 -- target/ns_masking.sh@91 -- # disconnect 00:13:54.599 14:54:40 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.599 14:54:40 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:54.856 14:54:40 -- target/ns_masking.sh@95 -- # connect 2 00:13:54.856 14:54:40 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 68041c9a-bc82-42c5-9d7a-8aea03f73452 -a 10.0.0.2 -s 4420 -i 4 00:13:55.113 14:54:40 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:55.113 14:54:40 -- common/autotest_common.sh@1184 -- # local i=0 00:13:55.113 14:54:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.113 14:54:40 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:13:55.113 14:54:40 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:13:55.113 14:54:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:57.008 14:54:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:57.008 14:54:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:57.008 14:54:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.008 14:54:42 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:13:57.008 14:54:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.008 14:54:42 -- common/autotest_common.sh@1194 -- # return 0 00:13:57.008 14:54:42 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:57.008 14:54:42 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:57.008 14:54:42 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:57.008 14:54:42 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:57.008 14:54:42 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:13:57.008 14:54:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:57.008 14:54:42 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:57.008 [ 0]:0x1 00:13:57.008 14:54:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.008 14:54:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.008 14:54:42 -- target/ns_masking.sh@40 -- # nguid=dcd6ad0e041b4c1fa5489bfe1ab36f1c 00:13:57.008 14:54:42 -- target/ns_masking.sh@41 -- # [[ dcd6ad0e041b4c1fa5489bfe1ab36f1c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.008 14:54:42 -- 
target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:13:57.008 14:54:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:57.008 14:54:42 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:57.008 [ 1]:0x2 00:13:57.008 14:54:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.008 14:54:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.265 14:54:42 -- target/ns_masking.sh@40 -- # nguid=ff60324ef4e54aae9f152a5e845fe5bf 00:13:57.265 14:54:42 -- target/ns_masking.sh@41 -- # [[ ff60324ef4e54aae9f152a5e845fe5bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.265 14:54:42 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:57.523 14:54:43 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:13:57.523 14:54:43 -- common/autotest_common.sh@638 -- # local es=0 00:13:57.523 14:54:43 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:57.523 14:54:43 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:57.523 14:54:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.523 14:54:43 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:57.523 14:54:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.523 14:54:43 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:57.523 14:54:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:57.523 14:54:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:57.523 14:54:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.523 14:54:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.523 14:54:43 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:57.523 14:54:43 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.523 14:54:43 -- common/autotest_common.sh@641 -- # es=1 00:13:57.523 14:54:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:57.523 14:54:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:57.523 14:54:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:57.523 14:54:43 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:13:57.523 14:54:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:57.523 14:54:43 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:57.523 [ 0]:0x2 00:13:57.523 14:54:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.523 14:54:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.523 14:54:43 -- target/ns_masking.sh@40 -- # nguid=ff60324ef4e54aae9f152a5e845fe5bf 00:13:57.523 14:54:43 -- target/ns_masking.sh@41 -- # [[ ff60324ef4e54aae9f152a5e845fe5bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.523 14:54:43 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.523 14:54:43 -- common/autotest_common.sh@638 -- # local es=0 00:13:57.523 14:54:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.523 14:54:43 -- common/autotest_common.sh@626 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.523 14:54:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.523 14:54:43 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.523 14:54:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.523 14:54:43 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.523 14:54:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.523 14:54:43 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.523 14:54:43 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:57.523 14:54:43 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.780 [2024-04-26 14:54:43.309033] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:57.780 request: 00:13:57.780 { 00:13:57.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.780 "nsid": 2, 00:13:57.780 "host": "nqn.2016-06.io.spdk:host1", 00:13:57.780 "method": "nvmf_ns_remove_host", 00:13:57.780 "req_id": 1 00:13:57.780 } 00:13:57.780 Got JSON-RPC error response 00:13:57.780 response: 00:13:57.780 { 00:13:57.780 "code": -32602, 00:13:57.780 "message": "Invalid parameters" 00:13:57.780 } 00:13:57.780 14:54:43 -- common/autotest_common.sh@641 -- # es=1 00:13:57.780 14:54:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:57.780 14:54:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:57.780 14:54:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:57.780 14:54:43 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:13:57.780 14:54:43 -- common/autotest_common.sh@638 -- # local es=0 00:13:57.780 14:54:43 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:57.780 14:54:43 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:57.780 14:54:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.780 14:54:43 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:57.780 14:54:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.780 14:54:43 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:57.780 14:54:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:57.780 14:54:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:57.780 14:54:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.780 14:54:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.780 14:54:43 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:57.780 14:54:43 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.780 14:54:43 -- common/autotest_common.sh@641 -- # es=1 00:13:57.780 14:54:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:57.780 14:54:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:57.780 14:54:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:57.780 14:54:43 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:13:57.780 14:54:43 -- target/ns_masking.sh@39 -- # 
nvme list-ns /dev/nvme0 00:13:57.780 14:54:43 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:57.780 [ 0]:0x2 00:13:57.780 14:54:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.780 14:54:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.780 14:54:43 -- target/ns_masking.sh@40 -- # nguid=ff60324ef4e54aae9f152a5e845fe5bf 00:13:57.780 14:54:43 -- target/ns_masking.sh@41 -- # [[ ff60324ef4e54aae9f152a5e845fe5bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.780 14:54:43 -- target/ns_masking.sh@108 -- # disconnect 00:13:57.780 14:54:43 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.038 14:54:43 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.296 14:54:43 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:58.296 14:54:43 -- target/ns_masking.sh@114 -- # nvmftestfini 00:13:58.296 14:54:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:58.296 14:54:43 -- nvmf/common.sh@117 -- # sync 00:13:58.296 14:54:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.296 14:54:43 -- nvmf/common.sh@120 -- # set +e 00:13:58.296 14:54:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.296 14:54:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.296 rmmod nvme_tcp 00:13:58.296 rmmod nvme_fabrics 00:13:58.296 rmmod nvme_keyring 00:13:58.296 14:54:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.296 14:54:43 -- nvmf/common.sh@124 -- # set -e 00:13:58.296 14:54:43 -- nvmf/common.sh@125 -- # return 0 00:13:58.296 14:54:43 -- nvmf/common.sh@478 -- # '[' -n 3736928 ']' 00:13:58.296 14:54:43 -- nvmf/common.sh@479 -- # killprocess 3736928 00:13:58.296 14:54:43 -- common/autotest_common.sh@936 -- # '[' -z 3736928 ']' 00:13:58.296 14:54:43 -- common/autotest_common.sh@940 -- # kill -0 3736928 00:13:58.296 14:54:43 -- common/autotest_common.sh@941 -- # uname 00:13:58.296 14:54:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:58.296 14:54:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3736928 00:13:58.296 14:54:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:58.296 14:54:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:58.296 14:54:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3736928' 00:13:58.296 killing process with pid 3736928 00:13:58.296 14:54:43 -- common/autotest_common.sh@955 -- # kill 3736928 00:13:58.296 14:54:43 -- common/autotest_common.sh@960 -- # wait 3736928 00:13:58.578 14:54:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:58.578 14:54:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:58.578 14:54:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:58.578 14:54:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.578 14:54:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.578 14:54:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.578 14:54:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.578 14:54:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.484 14:54:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.484 00:14:00.484 real 0m16.549s 00:14:00.484 user 0m51.411s 00:14:00.484 sys 0m3.658s 00:14:00.484 14:54:46 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:14:00.484 14:54:46 -- common/autotest_common.sh@10 -- # set +x 00:14:00.484 ************************************ 00:14:00.484 END TEST nvmf_ns_masking 00:14:00.484 ************************************ 00:14:00.742 14:54:46 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:00.743 14:54:46 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:00.743 14:54:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:00.743 14:54:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.743 14:54:46 -- common/autotest_common.sh@10 -- # set +x 00:14:00.743 ************************************ 00:14:00.743 START TEST nvmf_nvme_cli 00:14:00.743 ************************************ 00:14:00.743 14:54:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:00.743 * Looking for test storage... 00:14:00.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.743 14:54:46 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.743 14:54:46 -- nvmf/common.sh@7 -- # uname -s 00:14:00.743 14:54:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.743 14:54:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.743 14:54:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.743 14:54:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.743 14:54:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.743 14:54:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.743 14:54:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.743 14:54:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.743 14:54:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.743 14:54:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.743 14:54:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:00.743 14:54:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:00.743 14:54:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.743 14:54:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.743 14:54:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.743 14:54:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.743 14:54:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.743 14:54:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.743 14:54:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.743 14:54:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.743 14:54:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.743 
14:54:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.743 14:54:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.743 14:54:46 -- paths/export.sh@5 -- # export PATH 00:14:00.743 14:54:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.743 14:54:46 -- nvmf/common.sh@47 -- # : 0 00:14:00.743 14:54:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.743 14:54:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.743 14:54:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.743 14:54:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.743 14:54:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.743 14:54:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.743 14:54:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.743 14:54:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.743 14:54:46 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.743 14:54:46 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.743 14:54:46 -- target/nvme_cli.sh@14 -- # devs=() 00:14:00.743 14:54:46 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:00.743 14:54:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:00.743 14:54:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.743 14:54:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:00.743 14:54:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:00.743 14:54:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:00.743 14:54:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.743 14:54:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.743 14:54:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.743 14:54:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:00.743 14:54:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:00.743 14:54:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.743 14:54:46 -- common/autotest_common.sh@10 -- 
# set +x 00:14:03.276 14:54:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:03.276 14:54:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.276 14:54:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:03.276 14:54:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.276 14:54:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.276 14:54:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.276 14:54:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.276 14:54:48 -- nvmf/common.sh@295 -- # net_devs=() 00:14:03.276 14:54:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.276 14:54:48 -- nvmf/common.sh@296 -- # e810=() 00:14:03.276 14:54:48 -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.276 14:54:48 -- nvmf/common.sh@297 -- # x722=() 00:14:03.276 14:54:48 -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.276 14:54:48 -- nvmf/common.sh@298 -- # mlx=() 00:14:03.276 14:54:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.276 14:54:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.276 14:54:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.276 14:54:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.276 14:54:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.276 14:54:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.276 14:54:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.276 14:54:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.276 14:54:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.276 14:54:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.276 14:54:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.277 14:54:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.277 14:54:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.277 14:54:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.277 14:54:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.277 14:54:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.277 14:54:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:03.277 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:03.277 14:54:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.277 14:54:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:03.277 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:03.277 14:54:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.277 14:54:48 -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.277 14:54:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.277 14:54:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.277 14:54:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:03.277 14:54:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.277 14:54:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:03.277 Found net devices under 0000:84:00.0: cvl_0_0 00:14:03.277 14:54:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.277 14:54:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.277 14:54:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.277 14:54:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:03.277 14:54:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.277 14:54:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:03.277 Found net devices under 0000:84:00.1: cvl_0_1 00:14:03.277 14:54:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.277 14:54:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:03.277 14:54:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:03.277 14:54:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:03.277 14:54:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.277 14:54:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.277 14:54:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.277 14:54:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:03.277 14:54:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.277 14:54:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.277 14:54:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:03.277 14:54:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.277 14:54:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.277 14:54:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:03.277 14:54:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:03.277 14:54:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.277 14:54:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.277 14:54:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.277 14:54:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.277 14:54:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:03.277 14:54:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.277 14:54:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.277 14:54:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.277 14:54:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:03.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:03.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:14:03.277 00:14:03.277 --- 10.0.0.2 ping statistics --- 00:14:03.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.277 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:14:03.277 14:54:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:14:03.277 00:14:03.277 --- 10.0.0.1 ping statistics --- 00:14:03.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.277 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:14:03.277 14:54:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.277 14:54:48 -- nvmf/common.sh@411 -- # return 0 00:14:03.277 14:54:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:03.277 14:54:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.277 14:54:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:03.277 14:54:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.277 14:54:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:03.277 14:54:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:03.277 14:54:48 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:03.277 14:54:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:03.277 14:54:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:03.277 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:14:03.277 14:54:48 -- nvmf/common.sh@470 -- # nvmfpid=3740509 00:14:03.277 14:54:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.277 14:54:48 -- nvmf/common.sh@471 -- # waitforlisten 3740509 00:14:03.277 14:54:48 -- common/autotest_common.sh@817 -- # '[' -z 3740509 ']' 00:14:03.277 14:54:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.277 14:54:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:03.277 14:54:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.277 14:54:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:03.277 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:14:03.277 [2024-04-26 14:54:48.735809] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:14:03.277 [2024-04-26 14:54:48.735877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.277 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.277 [2024-04-26 14:54:48.775452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:03.277 [2024-04-26 14:54:48.826514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.277 [2024-04-26 14:54:48.930222] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:03.277 [2024-04-26 14:54:48.930302] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.277 [2024-04-26 14:54:48.930335] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.277 [2024-04-26 14:54:48.930361] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.277 [2024-04-26 14:54:48.930384] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.277 [2024-04-26 14:54:48.930470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.277 [2024-04-26 14:54:48.930540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.277 [2024-04-26 14:54:48.930610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.277 [2024-04-26 14:54:48.930600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.536 14:54:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:03.536 14:54:49 -- common/autotest_common.sh@850 -- # return 0 00:14:03.536 14:54:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:03.536 14:54:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:03.536 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:03.536 14:54:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.536 14:54:49 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.536 14:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.536 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:03.536 [2024-04-26 14:54:49.126985] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.536 14:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.536 14:54:49 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:03.536 14:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.536 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:03.536 Malloc0 00:14:03.536 14:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.536 14:54:49 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:03.536 14:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.536 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:03.536 Malloc1 00:14:03.536 14:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.536 14:54:49 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:03.536 14:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.536 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:03.536 14:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.536 14:54:49 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.536 14:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.536 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:03.536 14:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.536 14:54:49 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:03.536 14:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.536 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:03.536 14:54:49 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.536 14:54:49 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.536 14:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.536 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:03.536 [2024-04-26 14:54:49.209376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.536 14:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.536 14:54:49 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.536 14:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:03.536 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:03.536 14:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:03.536 14:54:49 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:03.801 00:14:03.801 Discovery Log Number of Records 2, Generation counter 2 00:14:03.801 =====Discovery Log Entry 0====== 00:14:03.801 trtype: tcp 00:14:03.801 adrfam: ipv4 00:14:03.801 subtype: current discovery subsystem 00:14:03.801 treq: not required 00:14:03.801 portid: 0 00:14:03.801 trsvcid: 4420 00:14:03.801 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:03.801 traddr: 10.0.0.2 00:14:03.801 eflags: explicit discovery connections, duplicate discovery information 00:14:03.801 sectype: none 00:14:03.801 =====Discovery Log Entry 1====== 00:14:03.801 trtype: tcp 00:14:03.801 adrfam: ipv4 00:14:03.801 subtype: nvme subsystem 00:14:03.801 treq: not required 00:14:03.801 portid: 0 00:14:03.801 trsvcid: 4420 00:14:03.801 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:03.801 traddr: 10.0.0.2 00:14:03.801 eflags: none 00:14:03.801 sectype: none 00:14:03.801 14:54:49 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:03.801 14:54:49 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:03.801 14:54:49 -- nvmf/common.sh@511 -- # local dev _ 00:14:03.801 14:54:49 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:03.801 14:54:49 -- nvmf/common.sh@510 -- # nvme list 00:14:03.801 14:54:49 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:14:03.801 14:54:49 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:03.801 14:54:49 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:14:03.801 14:54:49 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:03.801 14:54:49 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:03.801 14:54:49 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.366 14:54:49 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:04.366 14:54:49 -- common/autotest_common.sh@1184 -- # local i=0 00:14:04.366 14:54:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.366 14:54:49 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:14:04.366 14:54:49 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:14:04.366 14:54:49 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:06.264 14:54:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:06.264 14:54:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 
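The discovery and connect steps above are plain nvme-cli; a stand-alone sketch, assuming the host NQN/ID generated for this run and the 10.0.0.2:4420 listener created just before:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
    # Query the discovery service: expect the discovery subsystem plus cnode1.
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
    # Attach cnode1: its two Malloc namespaces surface as /dev/nvme0n1 and /dev/nvme0n2.
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420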
00:14:06.264 14:54:51 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.264 14:54:51 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:14:06.264 14:54:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.264 14:54:51 -- common/autotest_common.sh@1194 -- # return 0 00:14:06.264 14:54:51 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:06.264 14:54:51 -- nvmf/common.sh@511 -- # local dev _ 00:14:06.264 14:54:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.264 14:54:51 -- nvmf/common.sh@510 -- # nvme list 00:14:06.521 14:54:52 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:14:06.521 14:54:52 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.522 14:54:52 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:14:06.522 14:54:52 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.522 14:54:52 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:06.522 14:54:52 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:14:06.522 14:54:52 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.522 14:54:52 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:06.522 14:54:52 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:14:06.522 14:54:52 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.522 14:54:52 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:06.522 /dev/nvme0n1 ]] 00:14:06.522 14:54:52 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:06.522 14:54:52 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:06.522 14:54:52 -- nvmf/common.sh@511 -- # local dev _ 00:14:06.522 14:54:52 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.522 14:54:52 -- nvmf/common.sh@510 -- # nvme list 00:14:06.779 14:54:52 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:14:06.779 14:54:52 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.779 14:54:52 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:14:06.779 14:54:52 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.779 14:54:52 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:06.780 14:54:52 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:14:06.780 14:54:52 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.780 14:54:52 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:06.780 14:54:52 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:14:06.780 14:54:52 -- nvmf/common.sh@513 -- # read -r dev _ 00:14:06.780 14:54:52 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:06.780 14:54:52 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.038 14:54:52 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.038 14:54:52 -- common/autotest_common.sh@1205 -- # local i=0 00:14:07.038 14:54:52 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:07.038 14:54:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.038 14:54:52 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:07.038 14:54:52 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.038 14:54:52 -- common/autotest_common.sh@1217 -- # return 0 00:14:07.038 14:54:52 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:07.038 14:54:52 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.038 14:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 
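The waitforserial/waitforserial_disconnect helpers traced above are just polling loops keyed on the subsystem serial number; condensed, the readiness check and clean detach look like this (serial, NQN, and poll bounds as in this run):

    # Wait until both namespaces carrying serial SPDKISFASTANDAWESOME are visible.
    i=0
    while [ $((i++)) -le 15 ]; do
        [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ] && break
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # then poll again until the count drops to 0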
00:14:07.038 14:54:52 -- common/autotest_common.sh@10 -- # set +x 00:14:07.038 14:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.038 14:54:52 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:07.038 14:54:52 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:07.038 14:54:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:07.038 14:54:52 -- nvmf/common.sh@117 -- # sync 00:14:07.038 14:54:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.038 14:54:52 -- nvmf/common.sh@120 -- # set +e 00:14:07.038 14:54:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.038 14:54:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.038 rmmod nvme_tcp 00:14:07.038 rmmod nvme_fabrics 00:14:07.038 rmmod nvme_keyring 00:14:07.038 14:54:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.038 14:54:52 -- nvmf/common.sh@124 -- # set -e 00:14:07.038 14:54:52 -- nvmf/common.sh@125 -- # return 0 00:14:07.038 14:54:52 -- nvmf/common.sh@478 -- # '[' -n 3740509 ']' 00:14:07.038 14:54:52 -- nvmf/common.sh@479 -- # killprocess 3740509 00:14:07.038 14:54:52 -- common/autotest_common.sh@936 -- # '[' -z 3740509 ']' 00:14:07.038 14:54:52 -- common/autotest_common.sh@940 -- # kill -0 3740509 00:14:07.038 14:54:52 -- common/autotest_common.sh@941 -- # uname 00:14:07.038 14:54:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:07.038 14:54:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3740509 00:14:07.038 14:54:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:07.038 14:54:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:07.038 14:54:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3740509' 00:14:07.038 killing process with pid 3740509 00:14:07.038 14:54:52 -- common/autotest_common.sh@955 -- # kill 3740509 00:14:07.038 14:54:52 -- common/autotest_common.sh@960 -- # wait 3740509 00:14:07.296 14:54:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:07.296 14:54:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:07.296 14:54:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:07.296 14:54:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.296 14:54:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.296 14:54:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.296 14:54:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.296 14:54:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.825 14:54:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.825 00:14:09.825 real 0m8.636s 00:14:09.825 user 0m16.497s 00:14:09.825 sys 0m2.295s 00:14:09.825 14:54:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:09.825 14:54:54 -- common/autotest_common.sh@10 -- # set +x 00:14:09.825 ************************************ 00:14:09.825 END TEST nvmf_nvme_cli 00:14:09.826 ************************************ 00:14:09.826 14:54:54 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:09.826 14:54:54 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:09.826 14:54:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:09.826 14:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.826 14:54:54 -- common/autotest_common.sh@10 -- # set +x 00:14:09.826 ************************************ 00:14:09.826 START TEST nvmf_vfio_user 00:14:09.826 
************************************ 00:14:09.826 14:54:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:09.826 * Looking for test storage... 00:14:09.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.826 14:54:55 -- nvmf/common.sh@7 -- # uname -s 00:14:09.826 14:54:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.826 14:54:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.826 14:54:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.826 14:54:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.826 14:54:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.826 14:54:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.826 14:54:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.826 14:54:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.826 14:54:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.826 14:54:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.826 14:54:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:09.826 14:54:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:09.826 14:54:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.826 14:54:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.826 14:54:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.826 14:54:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.826 14:54:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.826 14:54:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.826 14:54:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.826 14:54:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.826 14:54:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.826 14:54:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.826 14:54:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.826 14:54:55 -- paths/export.sh@5 -- # export PATH 00:14:09.826 14:54:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.826 14:54:55 -- nvmf/common.sh@47 -- # : 0 00:14:09.826 14:54:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.826 14:54:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.826 14:54:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.826 14:54:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.826 14:54:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.826 14:54:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.826 14:54:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.826 14:54:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3741435 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3741435' 00:14:09.826 Process pid: 3741435 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3741435 00:14:09.826 14:54:55 -- common/autotest_common.sh@817 -- # '[' -z 3741435 ']' 00:14:09.826 14:54:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.826 14:54:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:09.826 14:54:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.826 14:54:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:09.826 14:54:55 -- common/autotest_common.sh@10 -- # set +x 00:14:09.826 [2024-04-26 14:54:55.214339] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:14:09.826 [2024-04-26 14:54:55.214425] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.826 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.826 [2024-04-26 14:54:55.247336] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:09.826 [2024-04-26 14:54:55.274706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.826 [2024-04-26 14:54:55.357842] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.826 [2024-04-26 14:54:55.357901] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.826 [2024-04-26 14:54:55.357929] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.826 [2024-04-26 14:54:55.357942] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.826 [2024-04-26 14:54:55.357952] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.826 [2024-04-26 14:54:55.358012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.826 [2024-04-26 14:54:55.358138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.826 [2024-04-26 14:54:55.358163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.826 [2024-04-26 14:54:55.358166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.826 14:54:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:09.826 14:54:55 -- common/autotest_common.sh@850 -- # return 0 00:14:09.826 14:54:55 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:10.757 14:54:56 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:11.015 14:54:56 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:11.015 14:54:56 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:11.015 14:54:56 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:11.015 14:54:56 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:11.015 14:54:56 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:11.581 Malloc1 00:14:11.581 14:54:57 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:11.839 14:54:57 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:12.096 14:54:57 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:12.353 14:54:57 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:12.353 14:54:57 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:12.353 14:54:57 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:12.611 Malloc2 00:14:12.611 14:54:58 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:12.868 14:54:58 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:13.127 14:54:58 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:13.388 14:54:58 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:13.388 14:54:58 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:13.388 14:54:58 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.388 14:54:58 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:13.388 14:54:58 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:13.388 14:54:58 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:13.388 [2024-04-26 14:54:58.947392] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:14:13.388 [2024-04-26 14:54:58.947437] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741859 ] 00:14:13.388 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.388 [2024-04-26 14:54:58.964665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
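Stripped of the harness wrappers, provisioning one VFIO-user controller is a short RPC sequence; a sketch assuming a running nvmf_tgt built with VFIO-user support, the default /var/tmp/spdk.sock RPC socket, and the paths used in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1    # directory backing the vfio-user device
    $rpc nvmf_create_transport -t VFIOUSER
    $rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MiB RAM disk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second controller (vfio-user2/2, cnode2, Malloc2) repeats the same pattern.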
00:14:13.388 [2024-04-26 14:54:58.982978] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:13.388 [2024-04-26 14:54:58.985445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:13.388 [2024-04-26 14:54:58.985472] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1ae7532000 00:14:13.388 [2024-04-26 14:54:58.986443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.388 [2024-04-26 14:54:58.987432] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.388 [2024-04-26 14:54:58.988436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.388 [2024-04-26 14:54:58.989440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.388 [2024-04-26 14:54:58.990441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.388 [2024-04-26 14:54:58.991448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.388 [2024-04-26 14:54:58.992452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.388 [2024-04-26 14:54:58.993454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.388 [2024-04-26 14:54:58.994465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:13.388 [2024-04-26 14:54:58.994486] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1ae62e3000 00:14:13.388 [2024-04-26 14:54:58.995601] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:13.388 [2024-04-26 14:54:59.009685] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:13.388 [2024-04-26 14:54:59.009718] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:13.388 [2024-04-26 14:54:59.018612] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:13.388 [2024-04-26 14:54:59.018660] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:13.388 [2024-04-26 14:54:59.018749] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:13.388 [2024-04-26 14:54:59.018777] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:13.388 [2024-04-26 14:54:59.018788] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 
00:14:13.388 [2024-04-26 14:54:59.019603] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:13.388 [2024-04-26 14:54:59.019621] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:13.388 [2024-04-26 14:54:59.019634] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:13.388 [2024-04-26 14:54:59.020603] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:13.388 [2024-04-26 14:54:59.020620] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:13.388 [2024-04-26 14:54:59.020634] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:13.388 [2024-04-26 14:54:59.021608] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:13.388 [2024-04-26 14:54:59.021626] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:13.388 [2024-04-26 14:54:59.022617] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:13.388 [2024-04-26 14:54:59.022635] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:13.388 [2024-04-26 14:54:59.022645] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:13.388 [2024-04-26 14:54:59.022656] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:13.388 [2024-04-26 14:54:59.022766] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:13.388 [2024-04-26 14:54:59.022774] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:13.388 [2024-04-26 14:54:59.022782] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:13.388 [2024-04-26 14:54:59.023623] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:13.388 [2024-04-26 14:54:59.024622] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:13.388 [2024-04-26 14:54:59.025631] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:13.388 [2024-04-26 14:54:59.026627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:13.388 [2024-04-26 14:54:59.026749] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:14:13.388 [2024-04-26 14:54:59.027641] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:13.388 [2024-04-26 14:54:59.027659] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:13.388 [2024-04-26 14:54:59.027668] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:13.388 [2024-04-26 14:54:59.027691] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:13.388 [2024-04-26 14:54:59.027705] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:13.388 [2024-04-26 14:54:59.027729] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.388 [2024-04-26 14:54:59.027738] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.388 [2024-04-26 14:54:59.027755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.027825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.027840] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:13.389 [2024-04-26 14:54:59.027849] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:13.389 [2024-04-26 14:54:59.027856] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:13.389 [2024-04-26 14:54:59.027864] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:13.389 [2024-04-26 14:54:59.027872] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:13.389 [2024-04-26 14:54:59.027880] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:13.389 [2024-04-26 14:54:59.027887] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.027900] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.027914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.027931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.027950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.389 [2024-04-26 14:54:59.027964] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.389 [2024-04-26 14:54:59.027975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.389 [2024-04-26 14:54:59.027987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.389 [2024-04-26 14:54:59.028013] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028038] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.028067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.028078] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:13.389 [2024-04-26 14:54:59.028087] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028105] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028116] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.028142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.028195] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028210] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028224] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:13.389 [2024-04-26 14:54:59.028232] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:13.389 [2024-04-26 14:54:59.028242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.028259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.028275] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:13.389 [2024-04-26 14:54:59.028295] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028324] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028337] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.389 [2024-04-26 14:54:59.028345] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.389 [2024-04-26 14:54:59.028355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.028393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.028414] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028428] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028443] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.389 [2024-04-26 14:54:59.028451] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.389 [2024-04-26 14:54:59.028461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.028472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.028485] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028497] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028510] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028520] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028528] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028537] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:13.389 [2024-04-26 14:54:59.028544] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:13.389 [2024-04-26 14:54:59.028552] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:13.389 [2024-04-26 14:54:59.028576] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.028593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.028611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.028623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.028638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.028652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.028668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:13.389 [2024-04-26 14:54:59.028679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:13.389 [2024-04-26 14:54:59.028695] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:13.389 [2024-04-26 14:54:59.028704] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:13.389 [2024-04-26 14:54:59.028711] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:13.390 [2024-04-26 14:54:59.028717] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:13.390 [2024-04-26 14:54:59.028726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:13.390 [2024-04-26 14:54:59.028737] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:13.390 [2024-04-26 14:54:59.028748] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:13.390 [2024-04-26 14:54:59.028757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:13.390 [2024-04-26 14:54:59.028768] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:13.390 [2024-04-26 14:54:59.028776] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.390 [2024-04-26 14:54:59.028784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.390 [2024-04-26 14:54:59.028796] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:13.390 [2024-04-26 14:54:59.028804] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:13.390 [2024-04-26 14:54:59.028812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:13.390 [2024-04-26 14:54:59.028824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:13.390 [2024-04-26 14:54:59.028844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:13.390 [2024-04-26 14:54:59.028859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:13.390 [2024-04-26 14:54:59.028871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:13.390 ===================================================== 00:14:13.390 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:13.390 ===================================================== 00:14:13.390 Controller Capabilities/Features 00:14:13.390 ================================ 00:14:13.390 Vendor ID: 4e58 00:14:13.390 Subsystem Vendor ID: 4e58 00:14:13.390 Serial Number: SPDK1 00:14:13.390 Model Number: SPDK bdev Controller 00:14:13.390 Firmware Version: 24.05 00:14:13.390 Recommended Arb Burst: 6 00:14:13.390 IEEE OUI Identifier: 8d 6b 50 00:14:13.390 Multi-path I/O 00:14:13.390 May have multiple subsystem ports: Yes 00:14:13.390 May have multiple controllers: Yes 00:14:13.390 Associated with SR-IOV VF: No 00:14:13.390 Max Data Transfer Size: 131072 00:14:13.390 Max Number of Namespaces: 32 00:14:13.390 Max Number of I/O Queues: 127 00:14:13.390 NVMe Specification Version (VS): 1.3 00:14:13.390 NVMe Specification Version (Identify): 1.3 00:14:13.390 Maximum Queue Entries: 256 00:14:13.390 Contiguous Queues Required: Yes 00:14:13.390 Arbitration Mechanisms Supported 00:14:13.390 Weighted Round Robin: Not Supported 00:14:13.390 Vendor Specific: Not Supported 00:14:13.390 Reset Timeout: 15000 ms 00:14:13.390 Doorbell Stride: 4 bytes 00:14:13.390 NVM Subsystem Reset: Not Supported 00:14:13.390 Command Sets Supported 00:14:13.390 NVM Command Set: Supported 00:14:13.390 Boot Partition: Not Supported 00:14:13.390 Memory Page Size Minimum: 4096 bytes 00:14:13.390 Memory Page Size Maximum: 4096 bytes 00:14:13.390 Persistent Memory Region: Not Supported 00:14:13.390 Optional Asynchronous Events Supported 00:14:13.390 Namespace Attribute Notices: Supported 00:14:13.390 Firmware Activation Notices: Not Supported 00:14:13.390 ANA Change Notices: Not Supported 00:14:13.390 PLE Aggregate Log Change Notices: Not Supported 00:14:13.390 LBA Status Info Alert Notices: Not Supported 00:14:13.390 EGE Aggregate Log Change Notices: Not Supported 00:14:13.390 Normal NVM Subsystem Shutdown event: Not Supported 00:14:13.390 Zone Descriptor Change Notices: Not Supported 00:14:13.390 Discovery Log Change Notices: Not Supported 00:14:13.390 Controller Attributes 00:14:13.390 128-bit Host Identifier: Supported 00:14:13.390 Non-Operational Permissive Mode: Not Supported 00:14:13.390 NVM Sets: Not Supported 00:14:13.390 Read Recovery Levels: Not Supported 00:14:13.390 Endurance Groups: Not Supported 00:14:13.390 Predictable Latency Mode: Not Supported 00:14:13.390 Traffic Based Keep ALive: Not Supported 00:14:13.390 Namespace Granularity: Not Supported 00:14:13.390 SQ Associations: Not Supported 00:14:13.390 UUID List: Not Supported 00:14:13.390 Multi-Domain Subsystem: Not Supported 00:14:13.390 Fixed Capacity Management: Not Supported 00:14:13.390 Variable Capacity Management: Not Supported 00:14:13.390 Delete Endurance Group: Not Supported 00:14:13.390 Delete NVM Set: Not Supported 00:14:13.390 Extended LBA Formats Supported: Not 
Supported 00:14:13.390 Flexible Data Placement Supported: Not Supported 00:14:13.390 00:14:13.390 Controller Memory Buffer Support 00:14:13.390 ================================ 00:14:13.390 Supported: No 00:14:13.390 00:14:13.390 Persistent Memory Region Support 00:14:13.390 ================================ 00:14:13.390 Supported: No 00:14:13.390 00:14:13.390 Admin Command Set Attributes 00:14:13.390 ============================ 00:14:13.390 Security Send/Receive: Not Supported 00:14:13.390 Format NVM: Not Supported 00:14:13.390 Firmware Activate/Download: Not Supported 00:14:13.390 Namespace Management: Not Supported 00:14:13.390 Device Self-Test: Not Supported 00:14:13.390 Directives: Not Supported 00:14:13.390 NVMe-MI: Not Supported 00:14:13.390 Virtualization Management: Not Supported 00:14:13.390 Doorbell Buffer Config: Not Supported 00:14:13.390 Get LBA Status Capability: Not Supported 00:14:13.390 Command & Feature Lockdown Capability: Not Supported 00:14:13.390 Abort Command Limit: 4 00:14:13.390 Async Event Request Limit: 4 00:14:13.390 Number of Firmware Slots: N/A 00:14:13.390 Firmware Slot 1 Read-Only: N/A 00:14:13.390 Firmware Activation Without Reset: N/A 00:14:13.390 Multiple Update Detection Support: N/A 00:14:13.390 Firmware Update Granularity: No Information Provided 00:14:13.390 Per-Namespace SMART Log: No 00:14:13.390 Asymmetric Namespace Access Log Page: Not Supported 00:14:13.390 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:13.390 Command Effects Log Page: Supported 00:14:13.391 Get Log Page Extended Data: Supported 00:14:13.391 Telemetry Log Pages: Not Supported 00:14:13.391 Persistent Event Log Pages: Not Supported 00:14:13.391 Supported Log Pages Log Page: May Support 00:14:13.391 Commands Supported & Effects Log Page: Not Supported 00:14:13.391 Feature Identifiers & Effects Log Page:May Support 00:14:13.391 NVMe-MI Commands & Effects Log Page: May Support 00:14:13.391 Data Area 4 for Telemetry Log: Not Supported 00:14:13.391 Error Log Page Entries Supported: 128 00:14:13.391 Keep Alive: Supported 00:14:13.391 Keep Alive Granularity: 10000 ms 00:14:13.391 00:14:13.391 NVM Command Set Attributes 00:14:13.391 ========================== 00:14:13.391 Submission Queue Entry Size 00:14:13.391 Max: 64 00:14:13.391 Min: 64 00:14:13.391 Completion Queue Entry Size 00:14:13.391 Max: 16 00:14:13.391 Min: 16 00:14:13.391 Number of Namespaces: 32 00:14:13.391 Compare Command: Supported 00:14:13.391 Write Uncorrectable Command: Not Supported 00:14:13.391 Dataset Management Command: Supported 00:14:13.391 Write Zeroes Command: Supported 00:14:13.391 Set Features Save Field: Not Supported 00:14:13.391 Reservations: Not Supported 00:14:13.391 Timestamp: Not Supported 00:14:13.391 Copy: Supported 00:14:13.391 Volatile Write Cache: Present 00:14:13.391 Atomic Write Unit (Normal): 1 00:14:13.391 Atomic Write Unit (PFail): 1 00:14:13.391 Atomic Compare & Write Unit: 1 00:14:13.391 Fused Compare & Write: Supported 00:14:13.391 Scatter-Gather List 00:14:13.391 SGL Command Set: Supported (Dword aligned) 00:14:13.391 SGL Keyed: Not Supported 00:14:13.391 SGL Bit Bucket Descriptor: Not Supported 00:14:13.391 SGL Metadata Pointer: Not Supported 00:14:13.391 Oversized SGL: Not Supported 00:14:13.391 SGL Metadata Address: Not Supported 00:14:13.391 SGL Offset: Not Supported 00:14:13.391 Transport SGL Data Block: Not Supported 00:14:13.391 Replay Protected Memory Block: Not Supported 00:14:13.391 00:14:13.391 Firmware Slot Information 00:14:13.391 ========================= 00:14:13.391 
Active slot: 1 00:14:13.391 Slot 1 Firmware Revision: 24.05 00:14:13.391 00:14:13.391 00:14:13.391 Commands Supported and Effects 00:14:13.391 ============================== 00:14:13.391 Admin Commands 00:14:13.391 -------------- 00:14:13.391 Get Log Page (02h): Supported 00:14:13.391 Identify (06h): Supported 00:14:13.391 Abort (08h): Supported 00:14:13.391 Set Features (09h): Supported 00:14:13.391 Get Features (0Ah): Supported 00:14:13.391 Asynchronous Event Request (0Ch): Supported 00:14:13.391 Keep Alive (18h): Supported 00:14:13.391 I/O Commands 00:14:13.391 ------------ 00:14:13.391 Flush (00h): Supported LBA-Change 00:14:13.391 Write (01h): Supported LBA-Change 00:14:13.391 Read (02h): Supported 00:14:13.391 Compare (05h): Supported 00:14:13.391 Write Zeroes (08h): Supported LBA-Change 00:14:13.391 Dataset Management (09h): Supported LBA-Change 00:14:13.391 Copy (19h): Supported LBA-Change 00:14:13.391 Unknown (79h): Supported LBA-Change 00:14:13.391 Unknown (7Ah): Supported 00:14:13.391 00:14:13.391 Error Log 00:14:13.391 ========= 00:14:13.391 00:14:13.391 Arbitration 00:14:13.391 =========== 00:14:13.391 Arbitration Burst: 1 00:14:13.391 00:14:13.391 Power Management 00:14:13.391 ================ 00:14:13.391 Number of Power States: 1 00:14:13.391 Current Power State: Power State #0 00:14:13.391 Power State #0: 00:14:13.391 Max Power: 0.00 W 00:14:13.391 Non-Operational State: Operational 00:14:13.391 Entry Latency: Not Reported 00:14:13.391 Exit Latency: Not Reported 00:14:13.391 Relative Read Throughput: 0 00:14:13.391 Relative Read Latency: 0 00:14:13.391 Relative Write Throughput: 0 00:14:13.391 Relative Write Latency: 0 00:14:13.391 Idle Power: Not Reported 00:14:13.391 Active Power: Not Reported 00:14:13.391 Non-Operational Permissive Mode: Not Supported 00:14:13.391 00:14:13.391 Health Information 00:14:13.391 ================== 00:14:13.391 Critical Warnings: 00:14:13.391 Available Spare Space: OK 00:14:13.391 Temperature: OK 00:14:13.391 Device Reliability: OK 00:14:13.391 Read Only: No 00:14:13.391 Volatile Memory Backup: OK 00:14:13.391 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:13.392 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:13.392 Available Spare: 0% 00:14:13.392 Available Spare Threshold: 0% 00:14:13.392 Life Percentage Used: 0% 00:14:13.392 Data Units Read: 0 00:14:13.392 Data Units Written: 0 00:14:13.392 Host Read Commands: 0 00:14:13.392 Host Write Commands: 0 00:14:13.392 Controller Busy Time: 0 minutes 00:14:13.392 Power Cycles: 0 00:14:13.392 Power On Hours: 0 hours 00:14:13.392 Unsafe Shutdowns: 0 00:14:13.392 Unrecoverable Media Errors: 0 00:14:13.392 Lifetime Error Log Entries: 0 00:14:13.392 Warning Temperature Time: 0 minutes 00:14:13.392 Critical Temperature Time: 0 minutes 00:14:13.392 00:14:13.392 Number of Queues 00:14:13.392 ================ 00:14:13.392 Number of I/O Submission Queues: 127 00:14:13.392 Number of I/O Completion Queues: 127 00:14:13.392 00:14:13.392 Active Namespaces 00:14:13.392 ================= 00:14:13.392 Namespace ID:1 00:14:13.392 Error Recovery Timeout: Unlimited 00:14:13.392 Command Set Identifier: NVM (00h) 00:14:13.392 Deallocate: Supported 00:14:13.392 Deallocated/Unwritten Error: Not Supported 00:14:13.392 Deallocated Read Value: Unknown 00:14:13.392 Deallocate in Write Zeroes: Not Supported 00:14:13.392 Deallocated Guard Field: 0xFFFF 00:14:13.392 Flush: Supported 00:14:13.392 Reservation: Supported 00:14:13.392 Namespace Sharing Capabilities: Multiple Controllers 00:14:13.392 Size (in LBAs): 131072 (0GiB) 00:14:13.392 Capacity (in LBAs): 131072 (0GiB) 00:14:13.392 Utilization (in LBAs): 131072 (0GiB) 00:14:13.392 NGUID: 6BD5E2987A1F483A9587E7655E556AE6 00:14:13.392 UUID: 6bd5e298-7a1f-483a-9587-e7655e556ae6 00:14:13.392 Thin Provisioning: Not Supported 00:14:13.392 Per-NS Atomic Units: Yes 00:14:13.392 Atomic Boundary Size (Normal): 0 00:14:13.392 Atomic Boundary Size (PFail): 0 00:14:13.392 Atomic Boundary Offset: 0 00:14:13.392 Maximum Single Source Range Length: 65535 00:14:13.392 Maximum Copy Length: 65535 00:14:13.392 Maximum Source Range Count: 1 00:14:13.392 NGUID/EUI64 Never Reused: No 00:14:13.392 Namespace Write Protected: No 00:14:13.392 Number of LBA Formats: 1 00:14:13.392 Current LBA Format: LBA Format #00 00:14:13.392 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:13.392 00:14:13.392
[2024-04-26 14:54:59.028997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:13.391 [2024-04-26 14:54:59.029036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:13.391 [2024-04-26 14:54:59.029091] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:13.391 [2024-04-26 14:54:59.029110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.391 [2024-04-26 14:54:59.029123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.391 [2024-04-26 14:54:59.029133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.391 [2024-04-26 14:54:59.029143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.391 [2024-04-26 14:54:59.029663] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:13.391 [2024-04-26 14:54:59.029683] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:13.391 [2024-04-26 14:54:59.030664] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:13.391 [2024-04-26 14:54:59.030748] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:13.391 [2024-04-26 14:54:59.030764] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:13.392 [2024-04-26 14:54:59.031677] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:13.392 [2024-04-26 14:54:59.031699] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:13.392 [2024-04-26 14:54:59.031751] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:13.392 [2024-04-26 14:54:59.037030] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:13.392 14:54:59 --
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:13.392 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.650 [2024-04-26 14:54:59.266855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:18.963 [2024-04-26 14:55:04.285888] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:18.963 Initializing NVMe Controllers 00:14:18.963 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:18.963 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:18.963 Initialization complete. Launching workers. 00:14:18.963 ======================================================== 00:14:18.963 Latency(us) 00:14:18.963 Device Information : IOPS MiB/s Average min max 00:14:18.963 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34039.69 132.97 3759.66 1183.44 8286.45 00:14:18.963 ======================================================== 00:14:18.963 Total : 34039.69 132.97 3759.66 1183.44 8286.45 00:14:18.963 00:14:18.963 14:55:04 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:18.963 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.963 [2024-04-26 14:55:04.533090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.220 [2024-04-26 14:55:09.576235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.220 Initializing NVMe Controllers 00:14:24.220 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:24.221 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:24.221 Initialization complete. Launching workers. 
00:14:24.221 ======================================================== 00:14:24.221 Latency(us) 00:14:24.221 Device Information : IOPS MiB/s Average min max 00:14:24.221 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16002.74 62.51 8003.87 5952.06 15968.77 00:14:24.221 ======================================================== 00:14:24.221 Total : 16002.74 62.51 8003.87 5952.06 15968.77 00:14:24.221 00:14:24.221 14:55:09 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:24.221 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.221 [2024-04-26 14:55:09.782294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.482 [2024-04-26 14:55:14.848356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.482 Initializing NVMe Controllers 00:14:29.482 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.482 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.482 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:29.482 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:29.482 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:29.482 Initialization complete. Launching workers. 00:14:29.482 Starting thread on core 2 00:14:29.482 Starting thread on core 3 00:14:29.482 Starting thread on core 1 00:14:29.482 14:55:14 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:29.482 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.482 [2024-04-26 14:55:15.157520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:32.762 [2024-04-26 14:55:18.239300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:32.762 Initializing NVMe Controllers 00:14:32.762 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.762 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.762 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:32.762 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:32.762 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:32.762 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:32.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:32.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:32.762 Initialization complete. Launching workers. 
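A sanity check on the two spdk_nvme_perf tables above: with -o 4096, the MiB/s column is simply IOPS scaled by the 4 KiB I/O size, and the average latency follows from Little's law (latency = queue depth / IOPS) since both runs hold -q 128 constant. A minimal shell verification (illustrative only, not part of the harness):

  # MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "read:  %.2f MiB/s\n", 34039.69 * 4096 / 1048576 }'   # -> 132.97
  awk 'BEGIN { printf "write: %.2f MiB/s\n", 16002.74 * 4096 / 1048576 }'   # -> 62.51
  # Little's law: 128 / 34039.69 = 3.76 ms, 128 / 16002.74 = 8.00 ms,
  # matching the Average column (3759.66 us and 8003.87 us)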
00:14:32.762 Starting thread on core 1 with urgent priority queue 00:14:32.762 Starting thread on core 2 with urgent priority queue 00:14:32.762 Starting thread on core 3 with urgent priority queue 00:14:32.762 Starting thread on core 0 with urgent priority queue 00:14:32.762 SPDK bdev Controller (SPDK1 ) core 0: 5727.33 IO/s 17.46 secs/100000 ios 00:14:32.762 SPDK bdev Controller (SPDK1 ) core 1: 5799.33 IO/s 17.24 secs/100000 ios 00:14:32.762 SPDK bdev Controller (SPDK1 ) core 2: 4884.33 IO/s 20.47 secs/100000 ios 00:14:32.762 SPDK bdev Controller (SPDK1 ) core 3: 5739.67 IO/s 17.42 secs/100000 ios 00:14:32.762 ======================================================== 00:14:32.762 00:14:32.762 14:55:18 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:32.762 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.020 [2024-04-26 14:55:18.534546] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.020 [2024-04-26 14:55:18.569071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:33.020 Initializing NVMe Controllers 00:14:33.020 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.020 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.020 Namespace ID: 1 size: 0GB 00:14:33.020 Initialization complete. 00:14:33.020 INFO: using host memory buffer for IO 00:14:33.020 Hello world! 00:14:33.020 14:55:18 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:33.020 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.278 [2024-04-26 14:55:18.860466] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.212 Initializing NVMe Controllers 00:14:34.212 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.212 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.212 Initialization complete. Launching workers. 
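The secs/100000 ios column in the arbitration table above is likewise derivable as 100000 ios divided by the per-core IO/s; for core 0, for instance (sketch only):

  awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 5727.33 }'   # -> 17.46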
00:14:34.212 submit (in ns) avg, min, max = 6290.3, 3484.4, 4015291.1 00:14:34.212 complete (in ns) avg, min, max = 29598.0, 2046.7, 4996744.4 00:14:34.212 00:14:34.212 Submit histogram 00:14:34.212 ================ 00:14:34.212 Range in us Cumulative Count 00:14:34.212 3.484 - 3.508: 0.4582% ( 62) 00:14:34.212 3.508 - 3.532: 1.5519% ( 148) 00:14:34.212 3.532 - 3.556: 4.3083% ( 373) 00:14:34.212 3.556 - 3.579: 10.6710% ( 861) 00:14:34.212 3.579 - 3.603: 18.6225% ( 1076) 00:14:34.212 3.603 - 3.627: 27.7860% ( 1240) 00:14:34.212 3.627 - 3.650: 38.0210% ( 1385) 00:14:34.212 3.650 - 3.674: 45.5439% ( 1018) 00:14:34.212 3.674 - 3.698: 52.9633% ( 1004) 00:14:34.212 3.698 - 3.721: 57.5672% ( 623) 00:14:34.212 3.721 - 3.745: 61.2105% ( 493) 00:14:34.212 3.745 - 3.769: 64.0334% ( 382) 00:14:34.212 3.769 - 3.793: 67.2037% ( 429) 00:14:34.212 3.793 - 3.816: 70.4552% ( 440) 00:14:34.212 3.816 - 3.840: 74.2019% ( 507) 00:14:34.212 3.840 - 3.864: 78.3550% ( 562) 00:14:34.212 3.864 - 3.887: 82.0573% ( 501) 00:14:34.212 3.887 - 3.911: 85.0872% ( 410) 00:14:34.212 3.911 - 3.935: 87.1859% ( 284) 00:14:34.212 3.935 - 3.959: 88.7378% ( 210) 00:14:34.212 3.959 - 3.982: 90.1419% ( 190) 00:14:34.212 3.982 - 4.006: 91.1913% ( 142) 00:14:34.212 4.006 - 4.030: 91.9967% ( 109) 00:14:34.212 4.030 - 4.053: 92.9205% ( 125) 00:14:34.212 4.053 - 4.077: 93.7408% ( 111) 00:14:34.212 4.077 - 4.101: 94.5537% ( 110) 00:14:34.212 4.101 - 4.124: 95.2335% ( 92) 00:14:34.212 4.124 - 4.148: 95.7065% ( 64) 00:14:34.212 4.148 - 4.172: 96.0464% ( 46) 00:14:34.212 4.172 - 4.196: 96.3051% ( 35) 00:14:34.212 4.196 - 4.219: 96.4824% ( 24) 00:14:34.212 4.219 - 4.243: 96.6745% ( 26) 00:14:34.212 4.243 - 4.267: 96.7854% ( 15) 00:14:34.212 4.267 - 4.290: 96.8741% ( 12) 00:14:34.212 4.290 - 4.314: 96.9997% ( 17) 00:14:34.212 4.314 - 4.338: 97.0514% ( 7) 00:14:34.212 4.338 - 4.361: 97.1549% ( 14) 00:14:34.212 4.361 - 4.385: 97.2140% ( 8) 00:14:34.212 4.385 - 4.409: 97.2879% ( 10) 00:14:34.212 4.409 - 4.433: 97.3322% ( 6) 00:14:34.212 4.433 - 4.456: 97.3692% ( 5) 00:14:34.212 4.456 - 4.480: 97.3988% ( 4) 00:14:34.212 4.504 - 4.527: 97.4061% ( 1) 00:14:34.212 4.575 - 4.599: 97.4135% ( 1) 00:14:34.212 4.599 - 4.622: 97.4357% ( 3) 00:14:34.212 4.622 - 4.646: 97.4505% ( 2) 00:14:34.212 4.646 - 4.670: 97.4800% ( 4) 00:14:34.212 4.670 - 4.693: 97.5170% ( 5) 00:14:34.212 4.693 - 4.717: 97.5244% ( 1) 00:14:34.212 4.717 - 4.741: 97.5539% ( 4) 00:14:34.212 4.741 - 4.764: 97.5613% ( 1) 00:14:34.212 4.764 - 4.788: 97.5983% ( 5) 00:14:34.212 4.788 - 4.812: 97.6574% ( 8) 00:14:34.212 4.812 - 4.836: 97.7165% ( 8) 00:14:34.212 4.836 - 4.859: 97.8126% ( 13) 00:14:34.212 4.859 - 4.883: 97.8643% ( 7) 00:14:34.212 4.883 - 4.907: 97.8791% ( 2) 00:14:34.212 4.907 - 4.930: 97.8939% ( 2) 00:14:34.212 4.930 - 4.954: 97.9013% ( 1) 00:14:34.212 4.954 - 4.978: 97.9308% ( 4) 00:14:34.212 4.978 - 5.001: 97.9678% ( 5) 00:14:34.212 5.001 - 5.025: 98.0269% ( 8) 00:14:34.212 5.025 - 5.049: 98.0565% ( 4) 00:14:34.212 5.049 - 5.073: 98.1008% ( 6) 00:14:34.212 5.073 - 5.096: 98.1525% ( 7) 00:14:34.212 5.096 - 5.120: 98.1599% ( 1) 00:14:34.212 5.120 - 5.144: 98.1821% ( 3) 00:14:34.212 5.144 - 5.167: 98.1969% ( 2) 00:14:34.212 5.167 - 5.191: 98.2116% ( 2) 00:14:34.212 5.191 - 5.215: 98.2190% ( 1) 00:14:34.212 5.215 - 5.239: 98.2264% ( 1) 00:14:34.212 5.239 - 5.262: 98.2338% ( 1) 00:14:34.212 5.333 - 5.357: 98.2412% ( 1) 00:14:34.212 5.357 - 5.381: 98.2486% ( 1) 00:14:34.212 5.381 - 5.404: 98.2560% ( 1) 00:14:34.212 5.523 - 5.547: 98.2634% ( 1) 00:14:34.212 5.594 - 5.618: 98.2708% ( 
1) 00:14:34.212 5.879 - 5.902: 98.2782% ( 1) 00:14:34.212 6.258 - 6.305: 98.2855% ( 1) 00:14:34.212 6.305 - 6.353: 98.2929% ( 1) 00:14:34.212 6.400 - 6.447: 98.3003% ( 1) 00:14:34.212 6.542 - 6.590: 98.3077% ( 1) 00:14:34.212 6.590 - 6.637: 98.3151% ( 1) 00:14:34.212 6.637 - 6.684: 98.3225% ( 1) 00:14:34.212 6.684 - 6.732: 98.3373% ( 2) 00:14:34.212 6.732 - 6.779: 98.3447% ( 1) 00:14:34.212 6.779 - 6.827: 98.3594% ( 2) 00:14:34.212 6.874 - 6.921: 98.3668% ( 1) 00:14:34.212 6.921 - 6.969: 98.3742% ( 1) 00:14:34.212 7.016 - 7.064: 98.3816% ( 1) 00:14:34.212 7.064 - 7.111: 98.3890% ( 1) 00:14:34.212 7.111 - 7.159: 98.3964% ( 1) 00:14:34.212 7.159 - 7.206: 98.4112% ( 2) 00:14:34.212 7.206 - 7.253: 98.4407% ( 4) 00:14:34.212 7.253 - 7.301: 98.4481% ( 1) 00:14:34.212 7.301 - 7.348: 98.4629% ( 2) 00:14:34.212 7.396 - 7.443: 98.4851% ( 3) 00:14:34.212 7.538 - 7.585: 98.4999% ( 2) 00:14:34.212 7.633 - 7.680: 98.5072% ( 1) 00:14:34.212 7.680 - 7.727: 98.5294% ( 3) 00:14:34.212 7.775 - 7.822: 98.5590% ( 4) 00:14:34.212 7.822 - 7.870: 98.5664% ( 1) 00:14:34.212 7.917 - 7.964: 98.5885% ( 3) 00:14:34.212 8.012 - 8.059: 98.5959% ( 1) 00:14:34.212 8.059 - 8.107: 98.6033% ( 1) 00:14:34.212 8.107 - 8.154: 98.6181% ( 2) 00:14:34.212 8.154 - 8.201: 98.6255% ( 1) 00:14:34.212 8.249 - 8.296: 98.6477% ( 3) 00:14:34.212 8.486 - 8.533: 98.6624% ( 2) 00:14:34.212 8.533 - 8.581: 98.6772% ( 2) 00:14:34.212 8.581 - 8.628: 98.6846% ( 1) 00:14:34.212 8.676 - 8.723: 98.6994% ( 2) 00:14:34.212 9.102 - 9.150: 98.7068% ( 1) 00:14:34.212 9.244 - 9.292: 98.7142% ( 1) 00:14:34.212 9.292 - 9.339: 98.7215% ( 1) 00:14:34.212 9.434 - 9.481: 98.7289% ( 1) 00:14:34.212 9.671 - 9.719: 98.7363% ( 1) 00:14:34.212 9.908 - 9.956: 98.7437% ( 1) 00:14:34.212 10.003 - 10.050: 98.7511% ( 1) 00:14:34.212 10.098 - 10.145: 98.7585% ( 1) 00:14:34.212 10.145 - 10.193: 98.7733% ( 2) 00:14:34.212 10.240 - 10.287: 98.7807% ( 1) 00:14:34.212 10.572 - 10.619: 98.7881% ( 1) 00:14:34.212 10.667 - 10.714: 98.7954% ( 1) 00:14:34.212 10.714 - 10.761: 98.8028% ( 1) 00:14:34.212 11.188 - 11.236: 98.8102% ( 1) 00:14:34.212 11.236 - 11.283: 98.8176% ( 1) 00:14:34.212 11.473 - 11.520: 98.8250% ( 1) 00:14:34.212 11.520 - 11.567: 98.8398% ( 2) 00:14:34.212 11.615 - 11.662: 98.8546% ( 2) 00:14:34.212 11.757 - 11.804: 98.8620% ( 1) 00:14:34.212 11.852 - 11.899: 98.8693% ( 1) 00:14:34.212 11.947 - 11.994: 98.8767% ( 1) 00:14:34.212 12.136 - 12.231: 98.8841% ( 1) 00:14:34.212 12.800 - 12.895: 98.8915% ( 1) 00:14:34.212 12.990 - 13.084: 98.9063% ( 2) 00:14:34.212 13.084 - 13.179: 98.9211% ( 2) 00:14:34.212 14.601 - 14.696: 98.9285% ( 1) 00:14:34.212 14.696 - 14.791: 98.9359% ( 1) 00:14:34.212 17.067 - 17.161: 98.9580% ( 3) 00:14:34.212 17.161 - 17.256: 98.9654% ( 1) 00:14:34.212 17.256 - 17.351: 98.9802% ( 2) 00:14:34.212 17.351 - 17.446: 99.0024% ( 3) 00:14:34.212 17.446 - 17.541: 99.0245% ( 3) 00:14:34.212 17.541 - 17.636: 99.0541% ( 4) 00:14:34.212 17.636 - 17.730: 99.0984% ( 6) 00:14:34.212 17.730 - 17.825: 99.1576% ( 8) 00:14:34.212 17.825 - 17.920: 99.1797% ( 3) 00:14:34.212 17.920 - 18.015: 99.2388% ( 8) 00:14:34.212 18.015 - 18.110: 99.2684% ( 4) 00:14:34.212 18.110 - 18.204: 99.3275% ( 8) 00:14:34.212 18.204 - 18.299: 99.3866% ( 8) 00:14:34.212 18.299 - 18.394: 99.4679% ( 11) 00:14:34.212 18.394 - 18.489: 99.5640% ( 13) 00:14:34.213 18.489 - 18.584: 99.6453% ( 11) 00:14:34.213 18.584 - 18.679: 99.7044% ( 8) 00:14:34.213 18.679 - 18.773: 99.7192% ( 2) 00:14:34.213 18.773 - 18.868: 99.7783% ( 8) 00:14:34.213 18.868 - 18.963: 99.7857% ( 1) 00:14:34.213 18.963 - 
19.058: 99.8005% ( 2) 00:14:34.213 19.058 - 19.153: 99.8300% ( 4) 00:14:34.213 19.153 - 19.247: 99.8448% ( 2) 00:14:34.213 19.247 - 19.342: 99.8522% ( 1) 00:14:34.213 19.342 - 19.437: 99.8818% ( 4) 00:14:34.213 19.532 - 19.627: 99.8892% ( 1) 00:14:34.213 19.627 - 19.721: 99.9039% ( 2) 00:14:34.213 19.721 - 19.816: 99.9113% ( 1) 00:14:34.213 21.523 - 21.618: 99.9187% ( 1) 00:14:34.213 21.902 - 21.997: 99.9261% ( 1) 00:14:34.213 24.462 - 24.652: 99.9335% ( 1) 00:14:34.213 28.634 - 28.824: 99.9409% ( 1) 00:14:34.213 3980.705 - 4004.978: 99.9852% ( 6) 00:14:34.213 4004.978 - 4029.250: 100.0000% ( 2) 00:14:34.213 00:14:34.213 Complete histogram 00:14:34.213 ================== 00:14:34.213 Range in us Cumulative Count 00:14:34.213 2.039 - 2.050: 0.1182% ( 16) 00:14:34.213 2.050 - 2.062: 6.2518% ( 830) 00:14:34.213 2.062 - 2.074: 13.3979% ( 967) 00:14:34.213 2.074 - 2.086: 17.4697% ( 551) 00:14:34.213 2.086 - 2.098: 45.1079% ( 3740) 00:14:34.213 2.098 - 2.110: 59.6512% ( 1968) 00:14:34.213 2.110 - 2.121: 62.8289% ( 430) 00:14:34.213 2.121 - 2.133: 66.4942% ( 496) 00:14:34.213 2.133 - 2.145: 68.0387% ( 209) 00:14:34.213 2.145 - 2.157: 70.2779% ( 303) 00:14:34.213 2.157 - 2.169: 78.2737% ( 1082) 00:14:34.213 2.169 - 2.181: 81.5770% ( 447) 00:14:34.213 2.181 - 2.193: 82.7224% ( 155) 00:14:34.213 2.193 - 2.204: 84.1856% ( 198) 00:14:34.213 2.204 - 2.216: 85.3532% ( 158) 00:14:34.213 2.216 - 2.228: 86.6021% ( 169) 00:14:34.213 2.228 - 2.240: 90.4966% ( 527) 00:14:34.213 2.240 - 2.252: 93.2456% ( 372) 00:14:34.213 2.252 - 2.264: 93.9329% ( 93) 00:14:34.213 2.264 - 2.276: 94.3911% ( 62) 00:14:34.213 2.276 - 2.287: 94.6793% ( 39) 00:14:34.213 2.287 - 2.299: 94.8566% ( 24) 00:14:34.213 2.299 - 2.311: 95.0931% ( 32) 00:14:34.213 2.311 - 2.323: 95.4109% ( 43) 00:14:34.213 2.323 - 2.335: 95.6104% ( 27) 00:14:34.213 2.335 - 2.347: 95.7730% ( 22) 00:14:34.213 2.347 - 2.359: 95.9356% ( 22) 00:14:34.213 2.359 - 2.370: 96.3494% ( 56) 00:14:34.213 2.370 - 2.382: 96.6598% ( 42) 00:14:34.213 2.382 - 2.394: 97.0145% ( 48) 00:14:34.213 2.394 - 2.406: 97.4727% ( 62) 00:14:34.213 2.406 - 2.418: 97.6722% ( 27) 00:14:34.213 2.418 - 2.430: 97.9013% ( 31) 00:14:34.213 2.430 - 2.441: 97.9899% ( 12) 00:14:34.213 2.441 - 2.453: 98.1230% ( 18) 00:14:34.213 2.453 - 2.465: 98.2116% ( 12) 00:14:34.213 2.465 - 2.477: 98.2708% ( 8) 00:14:34.213 2.477 - 2.489: 98.3151% ( 6) 00:14:34.213 2.489 - 2.501: 98.3742% ( 8) 00:14:34.213 2.501 - 2.513: 98.4038% ( 4) 00:14:34.213 2.513 - 2.524: 98.4260% ( 3) 00:14:34.213 2.524 - 2.536: 98.4481% ( 3) 00:14:34.213 2.536 - 2.548: 98.4629% ( 2) 00:14:34.213 2.560 - 2.572: 98.4777% ( 2) 00:14:34.213 2.572 - 2.584: 98.4851% ( 1) 00:14:34.213 2.584 - 2.596: 98.4925% ( 1) 00:14:34.213 2.596 - 2.607: 98.4999% ( 1) 00:14:34.213 2.607 - 2.619: 98.5146% ( 2) 00:14:34.213 2.619 - 2.631: 98.5294% ( 2) 00:14:34.213 2.643 - 2.655: 98.5368% ( 1) 00:14:34.213 2.667 - 2.679: 98.5442% ( 1) 00:14:34.213 2.785 - 2.797: 98.5516% ( 1) 00:14:34.213 3.224 - 3.247: 98.5590% ( 1) 00:14:34.213 3.247 - 3.271: 98.5664% ( 1) 00:14:34.213 3.319 - 3.342: 98.5738% ( 1) 00:14:34.213 3.342 - 3.366: 98.6107% ( 5) 00:14:34.213 3.390 - 3.413: 98.6181% ( 1) 00:14:34.213 3.413 - 3.437: 98.6329% ( 2) 00:14:34.213 3.461 - 3.484: 98.6477% ( 2) 00:14:34.213 3.484 - 3.508: 98.6624% ( 2) 00:14:34.213 3.556 - 3.579: 98.6698% ( 1) 00:14:34.213 3.627 - 3.650: 98.6772% ( 1) 00:14:34.213 3.698 - 3.721: 98.6846% ( 1) 00:14:34.213 3.721 - 3.745: 98.6920% ( 1) 00:14:34.213 3.745 - 3.769: 98.6994% ( 1) 00:14:34.213 3.840 - 3.864: 98.7068% ( 1) 
00:14:34.213 3.982 - 4.006: 98.7142% ( 1)
00:14:34.213 4.101 - 4.124: 98.7215% ( 1)
00:14:34.213 5.001 - 5.025: 98.7289% ( 1)
00:14:34.213 5.570 - 5.594: 98.7363% ( 1)
00:14:34.213 5.594 - 5.618: 98.7437% ( 1)
00:14:34.213 5.665 - 5.689: 98.7511% ( 1)
00:14:34.213 5.713 - 5.736: 98.7585% ( 1)
00:14:34.213 5.902 - 5.926: 98.7659% ( 1)
00:14:34.213 5.926 - 5.950: 98.7733% ( 1)
00:14:34.213 5.997 - 6.021: 98.7807% ( 1)
00:14:34.213 6.068 - 6.116: 98.7954% ( 2)
00:14:34.213 6.116 - 6.163: 98.8102% ( 2)
00:14:34.213 6.495 - 6.542: 98.8176% ( 1)
[2024-04-26 14:55:19.881744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:34.213 7.016 - 7.064: 98.8250% ( 1)
00:14:34.213 7.538 - 7.585: 98.8324% ( 1)
00:14:34.213 8.154 - 8.201: 98.8398% ( 1)
00:14:34.213 10.999 - 11.046: 98.8472% ( 1)
00:14:34.213 15.644 - 15.739: 98.8620% ( 2)
00:14:34.213 15.739 - 15.834: 98.8841% ( 3)
00:14:34.213 15.834 - 15.929: 98.8989% ( 2)
00:14:34.213 15.929 - 16.024: 98.9137% ( 2)
00:14:34.213 16.024 - 16.119: 98.9359% ( 3)
00:14:34.213 16.119 - 16.213: 98.9654% ( 4)
00:14:34.213 16.213 - 16.308: 99.0024% ( 5)
00:14:34.213 16.308 - 16.403: 99.0245% ( 3)
00:14:34.213 16.403 - 16.498: 99.0467% ( 3)
00:14:34.213 16.498 - 16.593: 99.0984% ( 7)
00:14:34.213 16.593 - 16.687: 99.1354% ( 5)
00:14:34.213 16.687 - 16.782: 99.2019% ( 9)
00:14:34.213 16.782 - 16.877: 99.2536% ( 7)
00:14:34.213 16.877 - 16.972: 99.2610% ( 1)
00:14:34.213 17.067 - 17.161: 99.2758% ( 2)
00:14:34.213 17.161 - 17.256: 99.2832% ( 1)
00:14:34.213 17.256 - 17.351: 99.2980% ( 2)
00:14:34.213 17.446 - 17.541: 99.3054% ( 1)
00:14:34.213 18.584 - 18.679: 99.3127% ( 1)
00:14:34.213 2366.578 - 2378.714: 99.3201% ( 1)
00:14:34.213 3203.982 - 3228.255: 99.3275% ( 1)
00:14:34.213 3980.705 - 4004.978: 99.8079% ( 65)
00:14:34.213 4004.978 - 4029.250: 99.9926% ( 25)
00:14:34.213 4975.881 - 5000.154: 100.0000% ( 1)
00:14:34.213
00:14:34.213 14:55:19 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:14:34.213 14:55:19 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:14:34.213 14:55:19 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:14:34.213 14:55:19 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:14:34.213 14:55:19 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:14:34.471 [2024-04-26 14:55:20.153247] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:14:34.471 [
00:14:34.471 {
00:14:34.471 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:34.471 "subtype": "Discovery",
00:14:34.471 "listen_addresses": [],
00:14:34.471 "allow_any_host": true,
00:14:34.471 "hosts": []
00:14:34.471 },
00:14:34.471 {
00:14:34.471 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:14:34.471 "subtype": "NVMe",
00:14:34.471 "listen_addresses": [
00:14:34.471 {
00:14:34.471 "transport": "VFIOUSER",
00:14:34.471 "trtype": "VFIOUSER",
00:14:34.471 "adrfam": "IPv4",
00:14:34.471 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:14:34.471 "trsvcid": "0"
00:14:34.471 }
00:14:34.471 ],
00:14:34.471 "allow_any_host": true,
00:14:34.471 "hosts": [],
00:14:34.471 "serial_number": "SPDK1",
00:14:34.471 "model_number": "SPDK bdev Controller",
00:14:34.471 "max_namespaces": 32,
00:14:34.471 "min_cntlid": 1, 00:14:34.471 "max_cntlid": 65519, 00:14:34.471 "namespaces": [ 00:14:34.471 { 00:14:34.471 "nsid": 1, 00:14:34.471 "bdev_name": "Malloc1", 00:14:34.471 "name": "Malloc1", 00:14:34.471 "nguid": "6BD5E2987A1F483A9587E7655E556AE6", 00:14:34.471 "uuid": "6bd5e298-7a1f-483a-9587-e7655e556ae6" 00:14:34.471 } 00:14:34.471 ] 00:14:34.471 }, 00:14:34.471 { 00:14:34.471 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:34.471 "subtype": "NVMe", 00:14:34.471 "listen_addresses": [ 00:14:34.471 { 00:14:34.471 "transport": "VFIOUSER", 00:14:34.471 "trtype": "VFIOUSER", 00:14:34.471 "adrfam": "IPv4", 00:14:34.471 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:34.471 "trsvcid": "0" 00:14:34.471 } 00:14:34.471 ], 00:14:34.471 "allow_any_host": true, 00:14:34.471 "hosts": [], 00:14:34.471 "serial_number": "SPDK2", 00:14:34.471 "model_number": "SPDK bdev Controller", 00:14:34.471 "max_namespaces": 32, 00:14:34.471 "min_cntlid": 1, 00:14:34.471 "max_cntlid": 65519, 00:14:34.471 "namespaces": [ 00:14:34.471 { 00:14:34.471 "nsid": 1, 00:14:34.471 "bdev_name": "Malloc2", 00:14:34.471 "name": "Malloc2", 00:14:34.471 "nguid": "14A074BBE87143B3AC9153C84A670928", 00:14:34.471 "uuid": "14a074bb-e871-43b3-ac91-53c84a670928" 00:14:34.471 } 00:14:34.471 ] 00:14:34.471 } 00:14:34.471 ] 00:14:34.471 14:55:20 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:34.471 14:55:20 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3744375 00:14:34.471 14:55:20 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:34.471 14:55:20 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:34.471 14:55:20 -- common/autotest_common.sh@1251 -- # local i=0 00:14:34.471 14:55:20 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:34.471 14:55:20 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:34.471 14:55:20 -- common/autotest_common.sh@1262 -- # return 0 00:14:34.471 14:55:20 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:34.471 14:55:20 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:34.729 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.729 [2024-04-26 14:55:20.332604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.729 Malloc3 00:14:34.729 14:55:20 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:34.988 [2024-04-26 14:55:20.688332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.988 14:55:20 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:35.245 Asynchronous Event Request test 00:14:35.245 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.245 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.245 Registering asynchronous event callbacks... 00:14:35.245 Starting namespace attribute notice tests for all controllers... 
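For context on the AER exchange above: the namespace-attribute notice (aen_event_type 0x02, log page 0x04, the Changed Namespace List) is provoked by hot-attaching a second namespace while the aer tool waits on the touch file. The two RPCs doing the work appear in the trace; in isolation the sequence is roughly (same calls as the harness, run from an SPDK checkout):

  # 64 MB malloc bdev with 512-byte blocks (131072 LBAs, the same geometry
  # as Malloc1 in the identify dump), then attach it to the subsystem as
  # NSID 2, which raises the Namespace Attribute Changed AEN
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2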
00:14:35.246 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:35.246 aer_cb - Changed Namespace 00:14:35.246 Cleaning up... 00:14:35.246 [ 00:14:35.246 { 00:14:35.246 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:35.246 "subtype": "Discovery", 00:14:35.246 "listen_addresses": [], 00:14:35.246 "allow_any_host": true, 00:14:35.246 "hosts": [] 00:14:35.246 }, 00:14:35.246 { 00:14:35.246 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:35.246 "subtype": "NVMe", 00:14:35.246 "listen_addresses": [ 00:14:35.246 { 00:14:35.246 "transport": "VFIOUSER", 00:14:35.246 "trtype": "VFIOUSER", 00:14:35.246 "adrfam": "IPv4", 00:14:35.246 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:35.246 "trsvcid": "0" 00:14:35.246 } 00:14:35.246 ], 00:14:35.246 "allow_any_host": true, 00:14:35.246 "hosts": [], 00:14:35.246 "serial_number": "SPDK1", 00:14:35.246 "model_number": "SPDK bdev Controller", 00:14:35.246 "max_namespaces": 32, 00:14:35.246 "min_cntlid": 1, 00:14:35.246 "max_cntlid": 65519, 00:14:35.246 "namespaces": [ 00:14:35.246 { 00:14:35.246 "nsid": 1, 00:14:35.246 "bdev_name": "Malloc1", 00:14:35.246 "name": "Malloc1", 00:14:35.246 "nguid": "6BD5E2987A1F483A9587E7655E556AE6", 00:14:35.246 "uuid": "6bd5e298-7a1f-483a-9587-e7655e556ae6" 00:14:35.246 }, 00:14:35.246 { 00:14:35.246 "nsid": 2, 00:14:35.246 "bdev_name": "Malloc3", 00:14:35.246 "name": "Malloc3", 00:14:35.246 "nguid": "620F4C42C4BD4A089AD0B8EB7EB1E3C9", 00:14:35.246 "uuid": "620f4c42-c4bd-4a08-9ad0-b8eb7eb1e3c9" 00:14:35.246 } 00:14:35.246 ] 00:14:35.246 }, 00:14:35.246 { 00:14:35.246 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:35.246 "subtype": "NVMe", 00:14:35.246 "listen_addresses": [ 00:14:35.246 { 00:14:35.246 "transport": "VFIOUSER", 00:14:35.246 "trtype": "VFIOUSER", 00:14:35.246 "adrfam": "IPv4", 00:14:35.246 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:35.246 "trsvcid": "0" 00:14:35.246 } 00:14:35.246 ], 00:14:35.246 "allow_any_host": true, 00:14:35.246 "hosts": [], 00:14:35.246 "serial_number": "SPDK2", 00:14:35.246 "model_number": "SPDK bdev Controller", 00:14:35.246 "max_namespaces": 32, 00:14:35.246 "min_cntlid": 1, 00:14:35.246 "max_cntlid": 65519, 00:14:35.246 "namespaces": [ 00:14:35.246 { 00:14:35.246 "nsid": 1, 00:14:35.246 "bdev_name": "Malloc2", 00:14:35.246 "name": "Malloc2", 00:14:35.246 "nguid": "14A074BBE87143B3AC9153C84A670928", 00:14:35.246 "uuid": "14a074bb-e871-43b3-ac91-53c84a670928" 00:14:35.246 } 00:14:35.246 ] 00:14:35.246 } 00:14:35.246 ] 00:14:35.246 14:55:20 -- target/nvmf_vfio_user.sh@44 -- # wait 3744375 00:14:35.246 14:55:20 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:35.246 14:55:20 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:35.246 14:55:20 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:35.246 14:55:20 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:35.246 [2024-04-26 14:55:20.977030] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
00:14:35.246 [2024-04-26 14:55:20.977072] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744502 ] 00:14:35.506 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.506 [2024-04-26 14:55:20.992633] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:35.506 [2024-04-26 14:55:21.010229] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:35.506 [2024-04-26 14:55:21.019310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:35.506 [2024-04-26 14:55:21.019354] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff023f34000 00:14:35.506 [2024-04-26 14:55:21.020308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.506 [2024-04-26 14:55:21.021312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.506 [2024-04-26 14:55:21.022313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.506 [2024-04-26 14:55:21.023332] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.506 [2024-04-26 14:55:21.024341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.506 [2024-04-26 14:55:21.025348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.506 [2024-04-26 14:55:21.026356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.506 [2024-04-26 14:55:21.027349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.506 [2024-04-26 14:55:21.028360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:35.506 [2024-04-26 14:55:21.028383] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff022ce5000 00:14:35.506 [2024-04-26 14:55:21.029498] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:35.506 [2024-04-26 14:55:21.044699] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:35.506 [2024-04-26 14:55:21.044734] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:35.506 [2024-04-26 14:55:21.049846] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:35.506 [2024-04-26 14:55:21.049898] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 
num_trackers = 192 00:14:35.506 [2024-04-26 14:55:21.049984] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:35.506 [2024-04-26 14:55:21.050029] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:35.506 [2024-04-26 14:55:21.050041] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:35.506 [2024-04-26 14:55:21.050851] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:35.506 [2024-04-26 14:55:21.050870] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:35.506 [2024-04-26 14:55:21.050883] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:35.506 [2024-04-26 14:55:21.051852] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:35.506 [2024-04-26 14:55:21.051871] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:35.506 [2024-04-26 14:55:21.051884] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:35.506 [2024-04-26 14:55:21.052859] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:35.506 [2024-04-26 14:55:21.052879] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:35.506 [2024-04-26 14:55:21.053859] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:35.506 [2024-04-26 14:55:21.053878] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:35.506 [2024-04-26 14:55:21.053887] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:35.506 [2024-04-26 14:55:21.053899] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:35.506 [2024-04-26 14:55:21.054015] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:35.506 [2024-04-26 14:55:21.054029] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:35.506 [2024-04-26 14:55:21.054039] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:35.506 [2024-04-26 14:55:21.054865] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:35.506 [2024-04-26 14:55:21.055867] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:35.506 [2024-04-26 14:55:21.056882] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:35.506 [2024-04-26 14:55:21.057871] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:35.506 [2024-04-26 14:55:21.057944] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:35.506 [2024-04-26 14:55:21.058886] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:35.506 [2024-04-26 14:55:21.058905] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:35.506 [2024-04-26 14:55:21.058915] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:35.506 [2024-04-26 14:55:21.058938] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:35.506 [2024-04-26 14:55:21.058951] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:35.506 [2024-04-26 14:55:21.058972] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.506 [2024-04-26 14:55:21.058982] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.506 [2024-04-26 14:55:21.059013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.506 [2024-04-26 14:55:21.067035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:35.506 [2024-04-26 14:55:21.067058] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:35.506 [2024-04-26 14:55:21.067067] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:35.506 [2024-04-26 14:55:21.067076] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:35.506 [2024-04-26 14:55:21.067084] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:35.506 [2024-04-26 14:55:21.067092] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:35.506 [2024-04-26 14:55:21.067100] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:35.506 [2024-04-26 14:55:21.067108] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:35.506 [2024-04-26 14:55:21.067121] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 
00:14:35.506 [2024-04-26 14:55:21.067137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:35.506 [2024-04-26 14:55:21.075029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:35.506 [2024-04-26 14:55:21.075057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.506 [2024-04-26 14:55:21.075087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.506 [2024-04-26 14:55:21.075101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.506 [2024-04-26 14:55:21.075113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.507 [2024-04-26 14:55:21.075123] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.075142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.075158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.083031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.083050] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:35.507 [2024-04-26 14:55:21.083060] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.083076] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.083087] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.083102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.091031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.091094] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.091109] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.091122] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:35.507 [2024-04-26 14:55:21.091131] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:14:35.507 [2024-04-26 14:55:21.091141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.096054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.096091] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:35.507 [2024-04-26 14:55:21.096112] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.096127] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.096140] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.507 [2024-04-26 14:55:21.096149] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.507 [2024-04-26 14:55:21.096159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.107029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.107057] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.107073] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.107086] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.507 [2024-04-26 14:55:21.107098] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.507 [2024-04-26 14:55:21.107109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.115030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.115050] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.115064] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.115078] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.115088] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.115097] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:35.507 [2024-04-26 
14:55:21.115106] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:35.507 [2024-04-26 14:55:21.115113] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:35.507 [2024-04-26 14:55:21.115122] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:35.507 [2024-04-26 14:55:21.115147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.123045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.123072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.131045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.131070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.139043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.139075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.147029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.147082] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:35.507 [2024-04-26 14:55:21.147093] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:35.507 [2024-04-26 14:55:21.147100] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:35.507 [2024-04-26 14:55:21.147107] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:35.507 [2024-04-26 14:55:21.147117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:35.507 [2024-04-26 14:55:21.147129] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:35.507 [2024-04-26 14:55:21.147137] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:35.507 [2024-04-26 14:55:21.147147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.147163] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:35.507 [2024-04-26 14:55:21.147172] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.507 [2024-04-26 14:55:21.147181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 
00:14:35.507 [2024-04-26 14:55:21.147194] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:35.507 [2024-04-26 14:55:21.147202] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:35.507 [2024-04-26 14:55:21.147211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:35.507 [2024-04-26 14:55:21.155029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.155059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.155076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:35.507 [2024-04-26 14:55:21.155088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:35.507 ===================================================== 00:14:35.507 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:35.507 ===================================================== 00:14:35.507 Controller Capabilities/Features 00:14:35.507 ================================ 00:14:35.507 Vendor ID: 4e58 00:14:35.507 Subsystem Vendor ID: 4e58 00:14:35.507 Serial Number: SPDK2 00:14:35.507 Model Number: SPDK bdev Controller 00:14:35.507 Firmware Version: 24.05 00:14:35.507 Recommended Arb Burst: 6 00:14:35.507 IEEE OUI Identifier: 8d 6b 50 00:14:35.507 Multi-path I/O 00:14:35.507 May have multiple subsystem ports: Yes 00:14:35.507 May have multiple controllers: Yes 00:14:35.507 Associated with SR-IOV VF: No 00:14:35.507 Max Data Transfer Size: 131072 00:14:35.507 Max Number of Namespaces: 32 00:14:35.507 Max Number of I/O Queues: 127 00:14:35.507 NVMe Specification Version (VS): 1.3 00:14:35.507 NVMe Specification Version (Identify): 1.3 00:14:35.507 Maximum Queue Entries: 256 00:14:35.507 Contiguous Queues Required: Yes 00:14:35.507 Arbitration Mechanisms Supported 00:14:35.507 Weighted Round Robin: Not Supported 00:14:35.507 Vendor Specific: Not Supported 00:14:35.507 Reset Timeout: 15000 ms 00:14:35.507 Doorbell Stride: 4 bytes 00:14:35.507 NVM Subsystem Reset: Not Supported 00:14:35.507 Command Sets Supported 00:14:35.507 NVM Command Set: Supported 00:14:35.507 Boot Partition: Not Supported 00:14:35.507 Memory Page Size Minimum: 4096 bytes 00:14:35.507 Memory Page Size Maximum: 4096 bytes 00:14:35.507 Persistent Memory Region: Not Supported 00:14:35.507 Optional Asynchronous Events Supported 00:14:35.507 Namespace Attribute Notices: Supported 00:14:35.507 Firmware Activation Notices: Not Supported 00:14:35.507 ANA Change Notices: Not Supported 00:14:35.507 PLE Aggregate Log Change Notices: Not Supported 00:14:35.507 LBA Status Info Alert Notices: Not Supported 00:14:35.507 EGE Aggregate Log Change Notices: Not Supported 00:14:35.507 Normal NVM Subsystem Shutdown event: Not Supported 00:14:35.507 Zone Descriptor Change Notices: Not Supported 00:14:35.507 Discovery Log Change Notices: Not Supported 00:14:35.507 Controller Attributes 00:14:35.508 128-bit Host Identifier: Supported 00:14:35.508 Non-Operational Permissive Mode: Not Supported 00:14:35.508 NVM Sets: Not Supported 00:14:35.508 Read Recovery Levels: Not Supported 
00:14:35.508 Endurance Groups: Not Supported 00:14:35.508 Predictable Latency Mode: Not Supported 00:14:35.508 Traffic Based Keep Alive: Not Supported 00:14:35.508 Namespace Granularity: Not Supported 00:14:35.508 SQ Associations: Not Supported 00:14:35.508 UUID List: Not Supported 00:14:35.508 Multi-Domain Subsystem: Not Supported 00:14:35.508 Fixed Capacity Management: Not Supported 00:14:35.508 Variable Capacity Management: Not Supported 00:14:35.508 Delete Endurance Group: Not Supported 00:14:35.508 Delete NVM Set: Not Supported 00:14:35.508 Extended LBA Formats Supported: Not Supported 00:14:35.508 Flexible Data Placement Supported: Not Supported 00:14:35.508 00:14:35.508 Controller Memory Buffer Support 00:14:35.508 ================================ 00:14:35.508 Supported: No 00:14:35.508 00:14:35.508 Persistent Memory Region Support 00:14:35.508 ================================ 00:14:35.508 Supported: No 00:14:35.508 00:14:35.508 Admin Command Set Attributes 00:14:35.508 ============================ 00:14:35.508 Security Send/Receive: Not Supported 00:14:35.508 Format NVM: Not Supported 00:14:35.508 Firmware Activate/Download: Not Supported 00:14:35.508 Namespace Management: Not Supported 00:14:35.508 Device Self-Test: Not Supported 00:14:35.508 Directives: Not Supported 00:14:35.508 NVMe-MI: Not Supported 00:14:35.508 Virtualization Management: Not Supported 00:14:35.508 Doorbell Buffer Config: Not Supported 00:14:35.508 Get LBA Status Capability: Not Supported 00:14:35.508 Command & Feature Lockdown Capability: Not Supported 00:14:35.508 Abort Command Limit: 4 00:14:35.508 Async Event Request Limit: 4 00:14:35.508 Number of Firmware Slots: N/A 00:14:35.508 Firmware Slot 1 Read-Only: N/A 00:14:35.508 Firmware Activation Without Reset: N/A 00:14:35.508 Multiple Update Detection Support: N/A 00:14:35.508 Firmware Update Granularity: No Information Provided 00:14:35.508 Per-Namespace SMART Log: No 00:14:35.508 Asymmetric Namespace Access Log Page: Not Supported 00:14:35.508 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:35.508 Command Effects Log Page: Supported 00:14:35.508 Get Log Page Extended Data: Supported 00:14:35.508 Telemetry Log Pages: Not Supported 00:14:35.508 Persistent Event Log Pages: Not Supported 00:14:35.508 Supported Log Pages Log Page: May Support 00:14:35.508 Commands Supported & Effects Log Page: Not Supported 00:14:35.508 Feature Identifiers & Effects Log Page: May Support 00:14:35.508 NVMe-MI Commands & Effects Log Page: May Support 00:14:35.508 Data Area 4 for Telemetry Log: Not Supported 00:14:35.508 Error Log Page Entries Supported: 128 00:14:35.508 Keep Alive: Supported 00:14:35.508 Keep Alive Granularity: 10000 ms 00:14:35.508 00:14:35.508 NVM Command Set Attributes 00:14:35.508 ========================== 00:14:35.508 Submission Queue Entry Size 00:14:35.508 Max: 64 00:14:35.508 Min: 64 00:14:35.508 Completion Queue Entry Size 00:14:35.508 Max: 16 00:14:35.508 Min: 16 00:14:35.508 Number of Namespaces: 32 00:14:35.508 Compare Command: Supported 00:14:35.508 Write Uncorrectable Command: Not Supported 00:14:35.508 Dataset Management Command: Supported 00:14:35.508 Write Zeroes Command: Supported 00:14:35.508 Set Features Save Field: Not Supported 00:14:35.508 Reservations: Not Supported 00:14:35.508 Timestamp: Not Supported 00:14:35.508 Copy: Supported 00:14:35.508 Volatile Write Cache: Present 00:14:35.508 Atomic Write Unit (Normal): 1 00:14:35.508 Atomic Write Unit (PFail): 1 00:14:35.508 Atomic Compare & Write Unit: 1 00:14:35.508 Fused Compare & Write:
Supported 00:14:35.508 Scatter-Gather List 00:14:35.508 SGL Command Set: Supported (Dword aligned) 00:14:35.508 SGL Keyed: Not Supported 00:14:35.508 SGL Bit Bucket Descriptor: Not Supported 00:14:35.508 SGL Metadata Pointer: Not Supported 00:14:35.508 Oversized SGL: Not Supported 00:14:35.508 SGL Metadata Address: Not Supported 00:14:35.508 SGL Offset: Not Supported 00:14:35.508 Transport SGL Data Block: Not Supported 00:14:35.508 Replay Protected Memory Block: Not Supported 00:14:35.508 00:14:35.508 Firmware Slot Information 00:14:35.508 ========================= 00:14:35.508 Active slot: 1 00:14:35.508 Slot 1 Firmware Revision: 24.05 00:14:35.508 00:14:35.508 00:14:35.508 Commands Supported and Effects 00:14:35.508 ============================== 00:14:35.508 Admin Commands 00:14:35.508 -------------- 00:14:35.508 Get Log Page (02h): Supported 00:14:35.508 Identify (06h): Supported 00:14:35.508 Abort (08h): Supported 00:14:35.508 Set Features (09h): Supported 00:14:35.508 Get Features (0Ah): Supported 00:14:35.508 Asynchronous Event Request (0Ch): Supported 00:14:35.508 Keep Alive (18h): Supported 00:14:35.508 I/O Commands 00:14:35.508 ------------ 00:14:35.508 Flush (00h): Supported LBA-Change 00:14:35.508 Write (01h): Supported LBA-Change 00:14:35.508 Read (02h): Supported 00:14:35.508 Compare (05h): Supported 00:14:35.508 Write Zeroes (08h): Supported LBA-Change 00:14:35.508 Dataset Management (09h): Supported LBA-Change 00:14:35.508 Copy (19h): Supported LBA-Change 00:14:35.508 Unknown (79h): Supported LBA-Change 00:14:35.508 Unknown (7Ah): Supported 00:14:35.508 00:14:35.508 Error Log 00:14:35.508 ========= 00:14:35.508 00:14:35.508 Arbitration 00:14:35.508 =========== 00:14:35.508 Arbitration Burst: 1 00:14:35.508 00:14:35.508 Power Management 00:14:35.508 ================ 00:14:35.508 Number of Power States: 1 00:14:35.508 Current Power State: Power State #0 00:14:35.508 Power State #0: 00:14:35.508 Max Power: 0.00 W 00:14:35.508 Non-Operational State: Operational 00:14:35.508 Entry Latency: Not Reported 00:14:35.508 Exit Latency: Not Reported 00:14:35.508 Relative Read Throughput: 0 00:14:35.508 Relative Read Latency: 0 00:14:35.508 Relative Write Throughput: 0 00:14:35.508 Relative Write Latency: 0 00:14:35.508 Idle Power: Not Reported 00:14:35.508 Active Power: Not Reported 00:14:35.508 Non-Operational Permissive Mode: Not Supported 00:14:35.508 00:14:35.508 Health Information 00:14:35.508 ================== 00:14:35.508 Critical Warnings: 00:14:35.508 Available Spare Space: OK 00:14:35.508 Temperature: OK 00:14:35.508 Device Reliability: OK 00:14:35.508 Read Only: No 00:14:35.508 Volatile Memory Backup: OK 00:14:35.508 Current Temperature: 0 Kelvin (-273 Celsius)
[2024-04-26 14:55:21.155212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:35.508 [2024-04-26 14:55:21.160066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:35.508 [2024-04-26 14:55:21.160112] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:35.508 [2024-04-26 14:55:21.160131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.508 [2024-04-26 14:55:21.160142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.508 [2024-04-26 14:55:21.160153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.508 [2024-04-26 14:55:21.160163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.508 [2024-04-26 14:55:21.160229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:35.508 [2024-04-26 14:55:21.160249] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:35.508 [2024-04-26 14:55:21.161237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:35.508 [2024-04-26 14:55:21.161320] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:35.508 [2024-04-26 14:55:21.161359] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:35.508 [2024-04-26 14:55:21.162246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:35.508 [2024-04-26 14:55:21.162269] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:35.508 [2024-04-26 14:55:21.162321] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:35.508 [2024-04-26 14:55:21.168045] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:35.508
Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:35.508 Available Spare: 0% 00:14:35.508 Available Spare Threshold: 0% 00:14:35.508 Life Percentage Used: 0% 00:14:35.508 Data Units Read: 0 00:14:35.508 Data Units Written: 0 00:14:35.508 Host Read Commands: 0 00:14:35.508 Host Write Commands: 0 00:14:35.508 Controller Busy Time: 0 minutes 00:14:35.508 Power Cycles: 0 00:14:35.508 Power On Hours: 0 hours 00:14:35.508 Unsafe Shutdowns: 0 00:14:35.508 Unrecoverable Media Errors: 0 00:14:35.509 Lifetime Error Log Entries: 0 00:14:35.509 Warning Temperature Time: 0 minutes 00:14:35.509 Critical Temperature Time: 0 minutes 00:14:35.509 00:14:35.509 Number of Queues 00:14:35.509 ================ 00:14:35.509 Number of I/O Submission Queues: 127 00:14:35.509 Number of I/O Completion Queues: 127 00:14:35.509 00:14:35.509 Active Namespaces 00:14:35.509 ================= 00:14:35.509 Namespace ID:1 00:14:35.509 Error Recovery Timeout: Unlimited 00:14:35.509 Command Set Identifier: NVM (00h) 00:14:35.509 Deallocate: Supported 00:14:35.509 Deallocated/Unwritten Error: Not Supported 00:14:35.509 Deallocated Read Value: Unknown 00:14:35.509 Deallocate in Write Zeroes: Not Supported 00:14:35.509 Deallocated Guard Field: 0xFFFF 00:14:35.509 Flush: Supported 00:14:35.509 Reservation: Supported 00:14:35.509 Namespace Sharing Capabilities: Multiple Controllers 00:14:35.509 Size (in LBAs): 131072 (0GiB) 00:14:35.509 Capacity (in LBAs): 131072 (0GiB) 00:14:35.509 Utilization (in LBAs): 131072 (0GiB) 00:14:35.509 NGUID: 14A074BBE87143B3AC9153C84A670928 00:14:35.509 UUID: 14a074bb-e871-43b3-ac91-53c84a670928 00:14:35.509 Thin Provisioning: Not Supported 00:14:35.509 Per-NS Atomic
Units: Yes 00:14:35.509 Atomic Boundary Size (Normal): 0 00:14:35.509 Atomic Boundary Size (PFail): 0 00:14:35.509 Atomic Boundary Offset: 0 00:14:35.509 Maximum Single Source Range Length: 65535 00:14:35.509 Maximum Copy Length: 65535 00:14:35.509 Maximum Source Range Count: 1 00:14:35.509 NGUID/EUI64 Never Reused: No 00:14:35.509 Namespace Write Protected: No 00:14:35.509 Number of LBA Formats: 1 00:14:35.509 Current LBA Format: LBA Format #00 00:14:35.509 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:35.509 00:14:35.509 14:55:21 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:35.509 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.766 [2024-04-26 14:55:21.399453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:41.079 [2024-04-26 14:55:26.501366] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:41.079 Initializing NVMe Controllers 00:14:41.079 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:41.079 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:41.079 Initialization complete. Launching workers. 00:14:41.079 ======================================================== 00:14:41.079 Latency(us) 00:14:41.079 Device Information : IOPS MiB/s Average min max 00:14:41.079 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35123.57 137.20 3643.54 1167.18 7724.16 00:14:41.079 ======================================================== 00:14:41.079 Total : 35123.57 137.20 3643.54 1167.18 7724.16 00:14:41.079 00:14:41.079 14:55:26 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:41.079 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.079 [2024-04-26 14:55:26.735056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:46.337 [2024-04-26 14:55:31.759346] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:46.337 Initializing NVMe Controllers 00:14:46.337 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:46.337 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:46.337 Initialization complete. Launching workers. 
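Steps @84 and @85 above run the same spdk_nvme_perf binary and differ only in the -w workload flag (read, then write); the write-side results table follows below. A minimal bash sketch of sweeping both workloads in one loop, reusing the exact flags and vfio-user transport string from this job (the binary path is specific to this workspace):

    #!/usr/bin/env bash
    # Sketch mirroring steps @84/@85: one perf run per workload.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    for wl in read write; do
        # -q 128 queue depth, -o 4096-byte I/O, -t 5 s runtime, -c 0x2 core mask, as above
        "$PERF" -r "$TR" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
    done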
00:14:46.337 ======================================================== 00:14:46.337 Latency(us) 00:14:46.337 Device Information : IOPS MiB/s Average min max 00:14:46.337 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32233.23 125.91 3970.45 1222.35 11349.70 00:14:46.337 ======================================================== 00:14:46.337 Total : 32233.23 125.91 3970.45 1222.35 11349.70 00:14:46.337 00:14:46.337 14:55:31 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:46.337 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.337 [2024-04-26 14:55:31.956091] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:51.598 [2024-04-26 14:55:37.113184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:51.598 Initializing NVMe Controllers 00:14:51.598 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:51.598 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:51.598 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:51.598 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:51.598 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:51.598 Initialization complete. Launching workers. 00:14:51.598 Starting thread on core 2 00:14:51.598 Starting thread on core 3 00:14:51.598 Starting thread on core 1 00:14:51.599 14:55:37 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:51.599 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.856 [2024-04-26 14:55:37.428022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:55.138 [2024-04-26 14:55:40.491351] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:55.138 Initializing NVMe Controllers 00:14:55.138 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.138 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.138 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:55.138 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:55.138 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:55.138 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:55.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:55.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:55.138 Initialization complete. Launching workers. 
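Each perf table above ends in a 'Total :' row whose columns are IOPS, MiB/s, then average, min and max latency in microseconds. A hedged one-liner for pulling those figures out of a captured run (perf.log is a hypothetical file holding output like the tables above); the arbitration results for the run just launched follow below:

    # Sketch: on the Total row, $3 is IOPS, $4 MiB/s, $5 average latency (us).
    awk '/Total[[:space:]]*:/ { print "iops=" $3, "mib_s=" $4, "avg_us=" $5 }' perf.log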
00:14:55.138 Starting thread on core 1 with urgent priority queue 00:14:55.138 Starting thread on core 2 with urgent priority queue 00:14:55.138 Starting thread on core 3 with urgent priority queue 00:14:55.138 Starting thread on core 0 with urgent priority queue 00:14:55.138 SPDK bdev Controller (SPDK2 ) core 0: 4960.33 IO/s 20.16 secs/100000 ios 00:14:55.138 SPDK bdev Controller (SPDK2 ) core 1: 5884.67 IO/s 16.99 secs/100000 ios 00:14:55.138 SPDK bdev Controller (SPDK2 ) core 2: 5761.33 IO/s 17.36 secs/100000 ios 00:14:55.138 SPDK bdev Controller (SPDK2 ) core 3: 5655.67 IO/s 17.68 secs/100000 ios 00:14:55.138 ======================================================== 00:14:55.138 00:14:55.138 14:55:40 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:55.138 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.138 [2024-04-26 14:55:40.797480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:55.138 [2024-04-26 14:55:40.806533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:55.138 Initializing NVMe Controllers 00:14:55.138 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.138 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.138 Namespace ID: 1 size: 0GB 00:14:55.138 Initialization complete. 00:14:55.138 INFO: using host memory buffer for IO 00:14:55.138 Hello world! 00:14:55.138 14:55:40 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:55.395 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.395 [2024-04-26 14:55:41.092276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.767 Initializing NVMe Controllers 00:14:56.767 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.767 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.767 Initialization complete. Launching workers. 
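A quick consistency check on the arbitration table above before the overhead histograms below: the secs/100000 ios column is just 100000 divided by the IO/s column, so the two figures can be verified against each other:

    100000 / 4960.33 IO/s ~= 20.16 s   (core 0, matches the printed 20.16)
    100000 / 5884.67 IO/s ~= 16.99 s   (core 1, matches the printed 16.99)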
00:14:56.767 submit (in ns) avg, min, max = 8207.7, 3485.6, 4015806.7 00:14:56.767 complete (in ns) avg, min, max = 23506.6, 2040.0, 4015075.6 00:14:56.767 00:14:56.767 Submit histogram 00:14:56.767 ================ 00:14:56.767 Range in us Cumulative Count 00:14:56.767 3.484 - 3.508: 0.2594% ( 36) 00:14:56.767 3.508 - 3.532: 1.0303% ( 107) 00:14:56.767 3.532 - 3.556: 3.1196% ( 290) 00:14:56.767 3.556 - 3.579: 7.5072% ( 609) 00:14:56.767 3.579 - 3.603: 15.3458% ( 1088) 00:14:56.767 3.603 - 3.627: 25.1729% ( 1364) 00:14:56.767 3.627 - 3.650: 37.4784% ( 1708) 00:14:56.767 3.650 - 3.674: 45.9366% ( 1174) 00:14:56.767 3.674 - 3.698: 53.9553% ( 1113) 00:14:56.767 3.698 - 3.721: 59.0562% ( 708) 00:14:56.767 3.721 - 3.745: 64.4741% ( 752) 00:14:56.767 3.745 - 3.769: 68.1700% ( 513) 00:14:56.767 3.769 - 3.793: 71.7291% ( 494) 00:14:56.767 3.793 - 3.816: 74.6614% ( 407) 00:14:56.767 3.816 - 3.840: 77.3199% ( 369) 00:14:56.767 3.840 - 3.864: 81.1239% ( 528) 00:14:56.767 3.864 - 3.887: 84.2651% ( 436) 00:14:56.767 3.887 - 3.911: 86.9380% ( 371) 00:14:56.767 3.911 - 3.935: 88.8617% ( 267) 00:14:56.767 3.935 - 3.959: 90.5692% ( 237) 00:14:56.767 3.959 - 3.982: 91.9524% ( 192) 00:14:56.767 3.982 - 4.006: 93.2853% ( 185) 00:14:56.767 4.006 - 4.030: 94.1138% ( 115) 00:14:56.767 4.030 - 4.053: 94.8487% ( 102) 00:14:56.767 4.053 - 4.077: 95.3242% ( 66) 00:14:56.767 4.077 - 4.101: 95.9222% ( 83) 00:14:56.767 4.101 - 4.124: 96.3617% ( 61) 00:14:56.767 4.124 - 4.148: 96.6715% ( 43) 00:14:56.767 4.148 - 4.172: 96.8588% ( 26) 00:14:56.767 4.172 - 4.196: 97.0821% ( 31) 00:14:56.767 4.196 - 4.219: 97.1686% ( 12) 00:14:56.767 4.219 - 4.243: 97.2334% ( 9) 00:14:56.767 4.243 - 4.267: 97.2983% ( 9) 00:14:56.767 4.267 - 4.290: 97.3703% ( 10) 00:14:56.767 4.290 - 4.314: 97.4424% ( 10) 00:14:56.767 4.314 - 4.338: 97.4928% ( 7) 00:14:56.767 4.338 - 4.361: 97.5865% ( 13) 00:14:56.767 4.361 - 4.385: 97.6369% ( 7) 00:14:56.767 4.385 - 4.409: 97.6945% ( 8) 00:14:56.767 4.409 - 4.433: 97.7161% ( 3) 00:14:56.767 4.433 - 4.456: 97.7378% ( 3) 00:14:56.767 4.456 - 4.480: 97.7450% ( 1) 00:14:56.767 4.527 - 4.551: 97.7594% ( 2) 00:14:56.767 4.551 - 4.575: 97.7738% ( 2) 00:14:56.767 4.670 - 4.693: 97.7810% ( 1) 00:14:56.767 4.693 - 4.717: 97.8026% ( 3) 00:14:56.767 4.717 - 4.741: 97.8098% ( 1) 00:14:56.767 4.741 - 4.764: 97.8242% ( 2) 00:14:56.767 4.764 - 4.788: 97.8746% ( 7) 00:14:56.767 4.788 - 4.812: 97.9251% ( 7) 00:14:56.767 4.812 - 4.836: 97.9683% ( 6) 00:14:56.767 4.836 - 4.859: 97.9971% ( 4) 00:14:56.767 4.859 - 4.883: 98.0836% ( 12) 00:14:56.767 4.883 - 4.907: 98.1340% ( 7) 00:14:56.767 4.907 - 4.930: 98.1772% ( 6) 00:14:56.767 4.930 - 4.954: 98.1916% ( 2) 00:14:56.767 4.954 - 4.978: 98.2421% ( 7) 00:14:56.767 4.978 - 5.001: 98.2853% ( 6) 00:14:56.767 5.001 - 5.025: 98.3069% ( 3) 00:14:56.767 5.025 - 5.049: 98.3646% ( 8) 00:14:56.767 5.049 - 5.073: 98.3862% ( 3) 00:14:56.767 5.073 - 5.096: 98.4150% ( 4) 00:14:56.767 5.096 - 5.120: 98.4294% ( 2) 00:14:56.767 5.120 - 5.144: 98.4582% ( 4) 00:14:56.767 5.144 - 5.167: 98.4654% ( 1) 00:14:56.767 5.167 - 5.191: 98.4726% ( 1) 00:14:56.767 5.191 - 5.215: 98.4798% ( 1) 00:14:56.767 5.215 - 5.239: 98.4942% ( 2) 00:14:56.767 5.239 - 5.262: 98.5086% ( 2) 00:14:56.767 5.286 - 5.310: 98.5159% ( 1) 00:14:56.767 5.333 - 5.357: 98.5231% ( 1) 00:14:56.767 5.547 - 5.570: 98.5303% ( 1) 00:14:56.767 5.594 - 5.618: 98.5375% ( 1) 00:14:56.767 5.641 - 5.665: 98.5519% ( 2) 00:14:56.767 5.665 - 5.689: 98.5591% ( 1) 00:14:56.767 6.021 - 6.044: 98.5663% ( 1) 00:14:56.767 6.447 - 6.495: 98.5735% ( 1) 
00:14:56.767 6.542 - 6.590: 98.5807% ( 1) 00:14:56.767 6.590 - 6.637: 98.5951% ( 2) 00:14:56.767 6.637 - 6.684: 98.6023% ( 1) 00:14:56.767 6.684 - 6.732: 98.6095% ( 1) 00:14:56.767 6.827 - 6.874: 98.6239% ( 2) 00:14:56.767 6.874 - 6.921: 98.6311% ( 1) 00:14:56.767 6.969 - 7.016: 98.6383% ( 1) 00:14:56.767 7.016 - 7.064: 98.6527% ( 2) 00:14:56.767 7.159 - 7.206: 98.6671% ( 2) 00:14:56.767 7.206 - 7.253: 98.6816% ( 2) 00:14:56.767 7.301 - 7.348: 98.6888% ( 1) 00:14:56.767 7.348 - 7.396: 98.6960% ( 1) 00:14:56.767 7.443 - 7.490: 98.7032% ( 1) 00:14:56.767 7.490 - 7.538: 98.7176% ( 2) 00:14:56.767 7.538 - 7.585: 98.7320% ( 2) 00:14:56.767 7.680 - 7.727: 98.7464% ( 2) 00:14:56.767 7.727 - 7.775: 98.7536% ( 1) 00:14:56.767 7.822 - 7.870: 98.7608% ( 1) 00:14:56.767 7.964 - 8.012: 98.7680% ( 1) 00:14:56.767 8.012 - 8.059: 98.7824% ( 2) 00:14:56.767 8.059 - 8.107: 98.7968% ( 2) 00:14:56.767 8.107 - 8.154: 98.8040% ( 1) 00:14:56.767 8.201 - 8.249: 98.8112% ( 1) 00:14:56.767 8.391 - 8.439: 98.8184% ( 1) 00:14:56.767 8.486 - 8.533: 98.8256% ( 1) 00:14:56.767 8.533 - 8.581: 98.8329% ( 1) 00:14:56.767 8.581 - 8.628: 98.8473% ( 2) 00:14:56.767 8.628 - 8.676: 98.8545% ( 1) 00:14:56.767 8.770 - 8.818: 98.8617% ( 1) 00:14:56.768 8.865 - 8.913: 98.8761% ( 2) 00:14:56.768 9.292 - 9.339: 98.8833% ( 1) 00:14:56.768 9.434 - 9.481: 98.8905% ( 1) 00:14:56.768 9.576 - 9.624: 98.8977% ( 1) 00:14:56.768 10.050 - 10.098: 98.9049% ( 1) 00:14:56.768 10.667 - 10.714: 98.9121% ( 1) 00:14:56.768 10.904 - 10.951: 98.9265% ( 2) 00:14:56.768 11.378 - 11.425: 98.9337% ( 1) 00:14:56.768 11.520 - 11.567: 98.9409% ( 1) 00:14:56.768 11.804 - 11.852: 98.9481% ( 1) 00:14:56.768 11.947 - 11.994: 98.9553% ( 1) 00:14:56.768 11.994 - 12.041: 98.9625% ( 1) 00:14:56.768 12.705 - 12.800: 98.9697% ( 1) 00:14:56.768 12.990 - 13.084: 98.9841% ( 2) 00:14:56.768 13.464 - 13.559: 98.9986% ( 2) 00:14:56.768 13.653 - 13.748: 99.0058% ( 1) 00:14:56.768 13.748 - 13.843: 99.0130% ( 1) 00:14:56.768 13.843 - 13.938: 99.0202% ( 1) 00:14:56.768 14.222 - 14.317: 99.0274% ( 1) 00:14:56.768 14.507 - 14.601: 99.0346% ( 1) 00:14:56.768 14.601 - 14.696: 99.0418% ( 1) 00:14:56.768 14.886 - 14.981: 99.0490% ( 1) 00:14:56.768 14.981 - 15.076: 99.0634% ( 2) 00:14:56.768 16.972 - 17.067: 99.0706% ( 1) 00:14:56.768 17.161 - 17.256: 99.0850% ( 2) 00:14:56.768 17.351 - 17.446: 99.1427% ( 8) 00:14:56.768 17.446 - 17.541: 99.2003% ( 8) 00:14:56.768 17.541 - 17.636: 99.2435% ( 6) 00:14:56.768 17.636 - 17.730: 99.3012% ( 8) 00:14:56.768 17.730 - 17.825: 99.3444% ( 6) 00:14:56.768 17.825 - 17.920: 99.4092% ( 9) 00:14:56.768 17.920 - 18.015: 99.4957% ( 12) 00:14:56.768 18.015 - 18.110: 99.5533% ( 8) 00:14:56.768 18.110 - 18.204: 99.5605% ( 1) 00:14:56.768 18.204 - 18.299: 99.5677% ( 1) 00:14:56.768 18.299 - 18.394: 99.6110% ( 6) 00:14:56.768 18.394 - 18.489: 99.6758% ( 9) 00:14:56.768 18.489 - 18.584: 99.7550% ( 11) 00:14:56.768 18.584 - 18.679: 99.7695% ( 2) 00:14:56.768 18.679 - 18.773: 99.7911% ( 3) 00:14:56.768 18.773 - 18.868: 99.8199% ( 4) 00:14:56.768 18.868 - 18.963: 99.8343% ( 2) 00:14:56.768 18.963 - 19.058: 99.8415% ( 1) 00:14:56.768 19.058 - 19.153: 99.8487% ( 1) 00:14:56.768 19.153 - 19.247: 99.8559% ( 1) 00:14:56.768 19.721 - 19.816: 99.8703% ( 2) 00:14:56.768 20.006 - 20.101: 99.8775% ( 1) 00:14:56.768 21.713 - 21.807: 99.8847% ( 1) 00:14:56.768 27.686 - 27.876: 99.8919% ( 1) 00:14:56.768 3980.705 - 4004.978: 99.9712% ( 11) 00:14:56.768 4004.978 - 4029.250: 100.0000% ( 4) 00:14:56.768 00:14:56.768 Complete histogram 00:14:56.768 ================== 
00:14:56.768 Range in us Cumulative Count 00:14:56.768 2.039 - 2.050: 4.3444% ( 603) 00:14:56.768 2.050 - 2.062: 11.1383% ( 943) 00:14:56.768 2.062 - 2.074: 13.0403% ( 264) 00:14:56.768 2.074 - 2.086: 43.2205% ( 4189) 00:14:56.768 2.086 - 2.098: 60.0504% ( 2336) 00:14:56.768 2.098 - 2.110: 63.2709% ( 447) 00:14:56.768 2.110 - 2.121: 68.5303% ( 730) 00:14:56.768 2.121 - 2.133: 69.9712% ( 200) 00:14:56.768 2.133 - 2.145: 72.3631% ( 332) 00:14:56.768 2.145 - 2.157: 84.0706% ( 1625) 00:14:56.768 2.157 - 2.169: 88.8401% ( 662) 00:14:56.768 2.169 - 2.181: 89.9568% ( 155) 00:14:56.768 2.181 - 2.193: 91.2320% ( 177) 00:14:56.768 2.193 - 2.204: 92.0965% ( 120) 00:14:56.768 2.204 - 2.216: 92.6081% ( 71) 00:14:56.768 2.216 - 2.228: 93.9697% ( 189) 00:14:56.768 2.228 - 2.240: 95.0576% ( 151) 00:14:56.768 2.240 - 2.252: 95.4539% ( 55) 00:14:56.768 2.252 - 2.264: 95.6916% ( 33) 00:14:56.768 2.264 - 2.276: 95.7853% ( 13) 00:14:56.768 2.276 - 2.287: 95.8501% ( 9) 00:14:56.768 2.287 - 2.299: 96.0014% ( 21) 00:14:56.768 2.299 - 2.311: 96.2176% ( 30) 00:14:56.768 2.311 - 2.323: 96.3689% ( 21) 00:14:56.768 2.323 - 2.335: 96.5346% ( 23) 00:14:56.768 2.335 - 2.347: 96.6787% ( 20) 00:14:56.768 2.347 - 2.359: 97.0173% ( 47) 00:14:56.768 2.359 - 2.370: 97.3919% ( 52) 00:14:56.768 2.370 - 2.382: 97.6369% ( 34) 00:14:56.768 2.382 - 2.394: 97.9251% ( 40) 00:14:56.768 2.394 - 2.406: 98.2205% ( 41) 00:14:56.768 2.406 - 2.418: 98.3213% ( 14) 00:14:56.768 2.418 - 2.430: 98.3862% ( 9) 00:14:56.768 2.430 - 2.441: 98.4510% ( 9) 00:14:56.768 2.441 - 2.453: 98.4726% ( 3) 00:14:56.768 2.453 - 2.465: 98.5159% ( 6) 00:14:56.768 2.465 - 2.477: 98.5447% ( 4) 00:14:56.768 2.477 - 2.489: 98.5591% ( 2) 00:14:56.768 2.489 - 2.501: 98.5879% ( 4) 00:14:56.768 2.501 - 2.513: 98.6023% ( 2) 00:14:56.768 2.513 - 2.524: 98.6311% ( 4) 00:14:56.768 2.536 - 2.548: 98.6527% ( 3) 00:14:56.768 2.572 - 2.584: 98.6671% ( 2) 00:14:56.768 2.584 - 2.596: 98.6744% ( 1) 00:14:56.768 2.607 - 2.619: 98.6888% ( 2) 00:14:56.768 2.655 - 2.667: 98.6960% ( 1) 00:14:56.768 2.667 - 2.679: 98.7032% ( 1) 00:14:56.768 2.690 - 2.702: 98.7104% ( 1) 00:14:56.768 2.726 - 2.738: 98.7176% ( 1) 00:14:56.768 3.390 - 3.413: 98.7248% ( 1) 00:14:56.768 3.413 - 3.437: 98.7392% ( 2) 00:14:56.768 3.437 - 3.461: 98.7464% ( 1) 00:14:56.768 3.461 - 3.484: 98.7608% ( 2) 00:14:56.768 3.484 - 3.508: 98.7680% ( 1) 00:14:56.768 3.508 - 3.532: 98.7896% ( 3) 00:14:56.768 3.532 - 3.556: 98.7968% ( 1) 00:14:56.768 3.579 - 3.603: 98.8112% ( 2) 00:14:56.768 3.603 - 3.627: 98.8184% ( 1) 00:14:56.768 3.627 - 3.650: 98.8256% ( 1) 00:14:56.768 3.650 - 3.674: 98.8401% ( 2) 00:14:56.768 3.674 - 3.698: 98.8473% ( 1) 00:14:56.768 3.721 - 3.745: 98.8545% ( 1) 00:14:56.768 3.745 - 3.769: 98.8617% ( 1) 00:14:56.768 3.769 - 3.793: 98.8689% ( 1) 00:14:56.768 3.793 - 3.816: 98.8761% ( 1) 00:14:56.768 3.816 - 3.840: 98.8977% ( 3) 00:14:56.768 3.840 - 3.864: 98.9049% ( 1) 00:14:56.768 3.911 - 3.935: 98.9121% ( 1) 00:14:56.768 3.982 - 4.006: 98.9193% ( 1) 00:14:56.768 4.101 - 4.124: 98.9265% ( 1) 00:14:56.768 4.290 - 4.314: 98.9337% ( 1) 00:14:56.768 4.480 - 4.504: 98.9409% ( 1) 00:14:56.768 5.286 - 5.310: 98.9481% ( 1) 00:14:56.768 5.428 - 5.452: 98.9553% ( 1) 00:14:56.768 5.926 - 5.950: 98.9625% ( 1) 00:14:56.768 5.973 - 5.997: 98.9697% ( 1) 00:14:56.768 6.447 - 6.495: 98.9769% ( 1) 00:14:56.768 6.590 - 6.637: 98.9841% ( 1) 00:14:56.768 6.732 - 6.779: 98.9986% ( 2) 00:14:56.768 6.969 - 7.016: 99.0058% ( 1) 00:14:56.768 7.633 - 7.680: 99.0130% ( 1) 00:14:56.768 9.197 - 9.244: 99.0202% ( 1) 00:14:56.768 10.572 
- 10.619: 99.0274% ( 1) 00:14:56.768 15.360 - 15.455: 99.0346% ( 1) 00:14:56.768 15.455 - 15.550: 99.0490% ( 2) 00:14:56.768 15.644 - 15.739: 99.0778% ( 4) 00:14:56.768 15.739 - 15.834: 99.0922% ( 2) 00:14:56.768 15.834 - 15.929: 99.1354% ( 6) 00:14:56.768 15.929 - 16.024: 99.1571% ( 3) 00:14:56.768 16.024 - 16.119: 99.1931% ( 5) 00:14:56.768 16.119 - 16.213: 99.2075% ( 2) 00:14:56.768 16.213 - 16.308: 99.2651% ( 8) 00:14:56.768 16.308 - 16.403: 99.2795% ( 2) 00:14:56.768 16.403 - 16.498: 99.2939% ( 2) 00:14:56.768 16.593 - 16.687: 99.3156% ( 3) 00:14:56.768 [2024-04-26 14:55:42.186726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.768 16.687 - 16.782: 99.3300% ( 2) 00:14:56.768 16.782 - 16.877: 99.3516% ( 3) 00:14:56.768 16.877 - 16.972: 99.3660% ( 2) 00:14:56.768 16.972 - 17.067: 99.3804% ( 2) 00:14:56.768 17.067 - 17.161: 99.3948% ( 2) 00:14:56.768 17.161 - 17.256: 99.4020% ( 1) 00:14:56.768 17.256 - 17.351: 99.4308% ( 4) 00:14:56.768 17.825 - 17.920: 99.4380% ( 1) 00:14:56.768 18.015 - 18.110: 99.4452% ( 1) 00:14:56.768 18.110 - 18.204: 99.4524% ( 1) 00:14:56.768 18.204 - 18.299: 99.4597% ( 1) 00:14:56.768 20.670 - 20.764: 99.4669% ( 1) 00:14:56.768 3980.705 - 4004.978: 99.9352% ( 65) 00:14:56.768 4004.978 - 4029.250: 100.0000% ( 9) 00:14:56.768 00:14:56.768 14:55:42 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:56.768 14:55:42 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:56.768 14:55:42 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:56.768 14:55:42 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:56.768 14:55:42 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:56.768 [ 00:14:56.768 { 00:14:56.768 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:56.768 "subtype": "Discovery", 00:14:56.768 "listen_addresses": [], 00:14:56.768 "allow_any_host": true, 00:14:56.768 "hosts": [] 00:14:56.768 }, 00:14:56.768 { 00:14:56.768 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:56.768 "subtype": "NVMe", 00:14:56.768 "listen_addresses": [ 00:14:56.768 { 00:14:56.768 "transport": "VFIOUSER", 00:14:56.768 "trtype": "VFIOUSER",
00:14:56.769 "adrfam": "IPv4", 00:14:56.769 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:56.769 "trsvcid": "0" 00:14:56.769 } 00:14:56.769 ], 00:14:56.769 "allow_any_host": true, 00:14:56.769 "hosts": [], 00:14:56.769 "serial_number": "SPDK2", 00:14:56.769 "model_number": "SPDK bdev Controller", 00:14:56.769 "max_namespaces": 32, 00:14:56.769 "min_cntlid": 1, 00:14:56.769 "max_cntlid": 65519, 00:14:56.769 "namespaces": [ 00:14:56.769 { 00:14:56.769 "nsid": 1, 00:14:56.769 "bdev_name": "Malloc2", 00:14:56.769 "name": "Malloc2", 00:14:56.769 "nguid": "14A074BBE87143B3AC9153C84A670928", 00:14:56.769 "uuid": "14a074bb-e871-43b3-ac91-53c84a670928" 00:14:56.769 } 00:14:56.769 ] 00:14:56.769 } 00:14:56.769 ] 00:14:56.769 14:55:42 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:56.769 14:55:42 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3747030 00:14:56.769 14:55:42 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:56.769 14:55:42 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:56.769 14:55:42 -- common/autotest_common.sh@1251 -- # local i=0 00:14:56.769 14:55:42 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.769 14:55:42 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.769 14:55:42 -- common/autotest_common.sh@1262 -- # return 0 00:14:56.769 14:55:42 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:56.769 14:55:42 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:57.027 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.027 [2024-04-26 14:55:42.642496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:57.027 Malloc4 00:14:57.027 14:55:42 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:57.284 [2024-04-26 14:55:42.978948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:57.284 14:55:42 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:57.542 Asynchronous Event Request test 00:14:57.542 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:57.542 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:57.542 Registering asynchronous event callbacks... 00:14:57.542 Starting namespace attribute notice tests for all controllers... 00:14:57.542 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:57.542 aer_cb - Changed Namespace 00:14:57.542 Cleaning up... 
00:14:57.542 [ 00:14:57.542 { 00:14:57.542 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:57.542 "subtype": "Discovery", 00:14:57.542 "listen_addresses": [], 00:14:57.542 "allow_any_host": true, 00:14:57.542 "hosts": [] 00:14:57.542 }, 00:14:57.542 { 00:14:57.542 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:57.542 "subtype": "NVMe", 00:14:57.542 "listen_addresses": [ 00:14:57.542 { 00:14:57.542 "transport": "VFIOUSER", 00:14:57.542 "trtype": "VFIOUSER", 00:14:57.542 "adrfam": "IPv4", 00:14:57.542 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:57.542 "trsvcid": "0" 00:14:57.542 } 00:14:57.542 ], 00:14:57.542 "allow_any_host": true, 00:14:57.542 "hosts": [], 00:14:57.542 "serial_number": "SPDK1", 00:14:57.542 "model_number": "SPDK bdev Controller", 00:14:57.542 "max_namespaces": 32, 00:14:57.542 "min_cntlid": 1, 00:14:57.542 "max_cntlid": 65519, 00:14:57.542 "namespaces": [ 00:14:57.542 { 00:14:57.542 "nsid": 1, 00:14:57.542 "bdev_name": "Malloc1", 00:14:57.542 "name": "Malloc1", 00:14:57.542 "nguid": "6BD5E2987A1F483A9587E7655E556AE6", 00:14:57.542 "uuid": "6bd5e298-7a1f-483a-9587-e7655e556ae6" 00:14:57.542 }, 00:14:57.542 { 00:14:57.542 "nsid": 2, 00:14:57.542 "bdev_name": "Malloc3", 00:14:57.542 "name": "Malloc3", 00:14:57.542 "nguid": "620F4C42C4BD4A089AD0B8EB7EB1E3C9", 00:14:57.542 "uuid": "620f4c42-c4bd-4a08-9ad0-b8eb7eb1e3c9" 00:14:57.542 } 00:14:57.542 ] 00:14:57.542 }, 00:14:57.542 { 00:14:57.542 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:57.542 "subtype": "NVMe", 00:14:57.542 "listen_addresses": [ 00:14:57.542 { 00:14:57.542 "transport": "VFIOUSER", 00:14:57.542 "trtype": "VFIOUSER", 00:14:57.542 "adrfam": "IPv4", 00:14:57.542 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:57.542 "trsvcid": "0" 00:14:57.542 } 00:14:57.542 ], 00:14:57.542 "allow_any_host": true, 00:14:57.542 "hosts": [], 00:14:57.542 "serial_number": "SPDK2", 00:14:57.542 "model_number": "SPDK bdev Controller", 00:14:57.542 "max_namespaces": 32, 00:14:57.542 "min_cntlid": 1, 00:14:57.542 "max_cntlid": 65519, 00:14:57.542 "namespaces": [ 00:14:57.542 { 00:14:57.542 "nsid": 1, 00:14:57.542 "bdev_name": "Malloc2", 00:14:57.542 "name": "Malloc2", 00:14:57.542 "nguid": "14A074BBE87143B3AC9153C84A670928", 00:14:57.542 "uuid": "14a074bb-e871-43b3-ac91-53c84a670928" 00:14:57.542 }, 00:14:57.542 { 00:14:57.542 "nsid": 2, 00:14:57.542 "bdev_name": "Malloc4", 00:14:57.542 "name": "Malloc4", 00:14:57.542 "nguid": "935A9DE393B947E19F9997C0525CDD19", 00:14:57.542 "uuid": "935a9de3-93b9-47e1-9f99-97c0525cdd19" 00:14:57.542 } 00:14:57.542 ] 00:14:57.542 } 00:14:57.542 ] 00:14:57.542 14:55:43 -- target/nvmf_vfio_user.sh@44 -- # wait 3747030 00:14:57.542 14:55:43 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:57.542 14:55:43 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3741435 00:14:57.542 14:55:43 -- common/autotest_common.sh@936 -- # '[' -z 3741435 ']' 00:14:57.542 14:55:43 -- common/autotest_common.sh@940 -- # kill -0 3741435 00:14:57.542 14:55:43 -- common/autotest_common.sh@941 -- # uname 00:14:57.542 14:55:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.542 14:55:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3741435 00:14:57.542 14:55:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:57.542 14:55:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:57.542 14:55:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3741435' 00:14:57.542 killing process with pid 3741435 00:14:57.542 
14:55:43 -- common/autotest_common.sh@955 -- # kill 3741435 00:14:57.542 [2024-04-26 14:55:43.266251] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:57.542 14:55:43 -- common/autotest_common.sh@960 -- # wait 3741435 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3747163 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3747163' 00:14:58.108 Process pid: 3747163 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:58.108 14:55:43 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3747163 00:14:58.108 14:55:43 -- common/autotest_common.sh@817 -- # '[' -z 3747163 ']' 00:14:58.108 14:55:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.108 14:55:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:58.108 14:55:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.108 14:55:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:58.108 14:55:43 -- common/autotest_common.sh@10 -- # set +x 00:14:58.108 [2024-04-26 14:55:43.649078] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:58.108 [2024-04-26 14:55:43.650171] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:14:58.108 [2024-04-26 14:55:43.650226] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.108 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.108 [2024-04-26 14:55:43.684203] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:58.108 [2024-04-26 14:55:43.716644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.108 [2024-04-26 14:55:43.805483] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.108 [2024-04-26 14:55:43.805553] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.108 [2024-04-26 14:55:43.805582] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.108 [2024-04-26 14:55:43.805596] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:58.108 [2024-04-26 14:55:43.805608] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.108 [2024-04-26 14:55:43.805696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.108 [2024-04-26 14:55:43.805750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.108 [2024-04-26 14:55:43.805865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.108 [2024-04-26 14:55:43.805867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.366 [2024-04-26 14:55:43.903818] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:14:58.366 [2024-04-26 14:55:43.904080] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:14:58.366 [2024-04-26 14:55:43.904326] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:14:58.366 [2024-04-26 14:55:43.905033] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:58.366 [2024-04-26 14:55:43.905150] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:14:58.366 14:55:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:58.366 14:55:43 -- common/autotest_common.sh@850 -- # return 0 00:14:58.366 14:55:43 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:59.298 14:55:44 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:59.555 14:55:45 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:59.555 14:55:45 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:59.555 14:55:45 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.555 14:55:45 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:59.555 14:55:45 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:59.812 Malloc1 00:14:59.812 14:55:45 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:00.070 14:55:45 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:00.328 14:55:45 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:00.585 14:55:46 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.585 14:55:46 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:00.585 14:55:46 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:00.842 Malloc2 00:15:00.843 14:55:46 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:01.118 14:55:46 -- target/nvmf_vfio_user.sh@73 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:01.383 14:55:46 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:01.641 14:55:47 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:01.641 14:55:47 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3747163 00:15:01.641 14:55:47 -- common/autotest_common.sh@936 -- # '[' -z 3747163 ']' 00:15:01.641 14:55:47 -- common/autotest_common.sh@940 -- # kill -0 3747163 00:15:01.641 14:55:47 -- common/autotest_common.sh@941 -- # uname 00:15:01.641 14:55:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.641 14:55:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3747163 00:15:01.641 14:55:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:01.641 14:55:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:01.641 14:55:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3747163' 00:15:01.641 killing process with pid 3747163 00:15:01.641 14:55:47 -- common/autotest_common.sh@955 -- # kill 3747163 00:15:01.641 14:55:47 -- common/autotest_common.sh@960 -- # wait 3747163 00:15:01.900 14:55:47 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:01.900 14:55:47 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:01.900 00:15:01.900 real 0m52.413s 00:15:01.900 user 3m27.142s 00:15:01.900 sys 0m4.307s 00:15:01.900 14:55:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:01.900 14:55:47 -- common/autotest_common.sh@10 -- # set +x 00:15:01.900 ************************************ 00:15:01.900 END TEST nvmf_vfio_user 00:15:01.900 ************************************ 00:15:01.900 14:55:47 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:01.900 14:55:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:01.900 14:55:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:01.900 14:55:47 -- common/autotest_common.sh@10 -- # set +x 00:15:01.900 ************************************ 00:15:01.900 START TEST nvmf_vfio_user_nvme_compliance 00:15:01.900 ************************************ 00:15:01.900 14:55:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:02.159 * Looking for test storage... 
00:15:02.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:02.159 14:55:47 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.159 14:55:47 -- nvmf/common.sh@7 -- # uname -s 00:15:02.159 14:55:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.159 14:55:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.159 14:55:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.159 14:55:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.159 14:55:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.159 14:55:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.159 14:55:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.159 14:55:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.159 14:55:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.159 14:55:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.159 14:55:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:02.159 14:55:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:02.159 14:55:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.159 14:55:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.159 14:55:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.159 14:55:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.159 14:55:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.159 14:55:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.159 14:55:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.159 14:55:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.159 14:55:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.159 14:55:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.159 14:55:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.159 14:55:47 -- paths/export.sh@5 -- # export PATH 00:15:02.159 14:55:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.159 14:55:47 -- nvmf/common.sh@47 -- # : 0 00:15:02.159 14:55:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.159 14:55:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.159 14:55:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.159 14:55:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.159 14:55:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.159 14:55:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.159 14:55:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.159 14:55:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.159 14:55:47 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.159 14:55:47 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.159 14:55:47 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:02.159 14:55:47 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:02.159 14:55:47 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:02.159 14:55:47 -- compliance/compliance.sh@20 -- # nvmfpid=3747654 00:15:02.159 14:55:47 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:02.159 14:55:47 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3747654' 00:15:02.159 Process pid: 3747654 00:15:02.159 14:55:47 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:02.159 14:55:47 -- compliance/compliance.sh@24 -- # waitforlisten 3747654 00:15:02.159 14:55:47 -- common/autotest_common.sh@817 -- # '[' -z 3747654 ']' 00:15:02.159 14:55:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.159 14:55:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:02.159 14:55:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.159 14:55:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:02.159 14:55:47 -- common/autotest_common.sh@10 -- # set +x 00:15:02.159 [2024-04-26 14:55:47.749906] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
00:15:02.159 [2024-04-26 14:55:47.750011] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.159 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.159 [2024-04-26 14:55:47.782619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:02.159 [2024-04-26 14:55:47.809747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:02.159 [2024-04-26 14:55:47.894144] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.159 [2024-04-26 14:55:47.894205] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.159 [2024-04-26 14:55:47.894226] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.159 [2024-04-26 14:55:47.894237] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.159 [2024-04-26 14:55:47.894248] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.159 [2024-04-26 14:55:47.894331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.159 [2024-04-26 14:55:47.894397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.159 [2024-04-26 14:55:47.894399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.417 14:55:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.417 14:55:48 -- common/autotest_common.sh@850 -- # return 0 00:15:02.417 14:55:48 -- compliance/compliance.sh@26 -- # sleep 1 00:15:03.349 14:55:49 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:03.349 14:55:49 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:03.349 14:55:49 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:03.349 14:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.349 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:15:03.349 14:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.349 14:55:49 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:03.349 14:55:49 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:03.349 14:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.349 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:15:03.349 malloc0 00:15:03.349 14:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.349 14:55:49 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:03.349 14:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.349 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:15:03.349 14:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.349 14:55:49 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:03.349 14:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.349 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:15:03.349 14:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.349 14:55:49 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a 
/var/run/vfio-user -s 0 00:15:03.349 14:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.349 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:15:03.349 14:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.349 14:55:49 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:03.610 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.610 00:15:03.610 00:15:03.610 CUnit - A unit testing framework for C - Version 2.1-3 00:15:03.610 http://cunit.sourceforge.net/ 00:15:03.610 00:15:03.610 00:15:03.610 Suite: nvme_compliance 00:15:03.610 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-26 14:55:49.247484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.610 [2024-04-26 14:55:49.248903] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:03.610 [2024-04-26 14:55:49.248928] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:03.610 [2024-04-26 14:55:49.248940] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:03.610 [2024-04-26 14:55:49.250502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.610 passed 00:15:03.610 Test: admin_identify_ctrlr_verify_fused ...[2024-04-26 14:55:49.337106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.610 [2024-04-26 14:55:49.340117] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.867 passed 00:15:03.867 Test: admin_identify_ns ...[2024-04-26 14:55:49.426608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.867 [2024-04-26 14:55:49.490035] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:03.867 [2024-04-26 14:55:49.498038] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:03.867 [2024-04-26 14:55:49.519141] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.867 passed 00:15:03.867 Test: admin_get_features_mandatory_features ...[2024-04-26 14:55:49.602447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.867 [2024-04-26 14:55:49.607480] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.124 passed 00:15:04.124 Test: admin_get_features_optional_features ...[2024-04-26 14:55:49.693034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.124 [2024-04-26 14:55:49.696058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.124 passed 00:15:04.124 Test: admin_set_features_number_of_queues ...[2024-04-26 14:55:49.783204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.382 [2024-04-26 14:55:49.888136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.382 passed 00:15:04.382 Test: admin_get_log_page_mandatory_logs ...[2024-04-26 14:55:49.971868] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.382 [2024-04-26 14:55:49.974892] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.382 passed 00:15:04.382 
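Note: the suite above runs against an nvmf_tgt launched with '-m 0x7' and configured entirely over /var/tmp/spdk.sock. A minimal sketch of the bring-up that the compliance.sh trace performs, assuming rpc.py is invoked from the SPDK checkout; every RPC name and argument below is taken from the trace itself:

    rpc=scripts/rpc.py                                    # path assumed
    $rpc nvmf_create_transport -t VFIOUSER                # register the vfio-user transport
    mkdir -p /var/run/vfio-user                           # listener traddr is a directory
    $rpc bdev_malloc_create 64 512 -b malloc0             # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Each test case then enables the controller, issues a deliberately malformed command, checks that the target rejects it on the expected error path, and disables the controller again, which is why every 'passed' above is bracketed by an enable_ctrlr/disable_ctrlr NOTICE pair.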
Test: admin_get_log_page_with_lpo ...[2024-04-26 14:55:50.060992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.639 [2024-04-26 14:55:50.133051] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:04.639 [2024-04-26 14:55:50.146112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.639 passed 00:15:04.639 Test: fabric_property_get ...[2024-04-26 14:55:50.235237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.639 [2024-04-26 14:55:50.236512] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:04.639 [2024-04-26 14:55:50.238262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.639 passed 00:15:04.639 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-26 14:55:50.328881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.639 [2024-04-26 14:55:50.330160] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:04.639 [2024-04-26 14:55:50.331899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.639 passed 00:15:04.896 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-26 14:55:50.415148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.896 [2024-04-26 14:55:50.499030] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:04.896 [2024-04-26 14:55:50.515028] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:04.896 [2024-04-26 14:55:50.520267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.896 passed 00:15:04.896 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-26 14:55:50.605943] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.896 [2024-04-26 14:55:50.607247] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:04.896 [2024-04-26 14:55:50.608966] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.153 passed 00:15:05.153 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-26 14:55:50.696216] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.153 [2024-04-26 14:55:50.771029] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:05.153 [2024-04-26 14:55:50.795027] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:05.153 [2024-04-26 14:55:50.800135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.153 passed 00:15:05.153 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-26 14:55:50.881368] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.153 [2024-04-26 14:55:50.882661] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:05.153 [2024-04-26 14:55:50.882700] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:05.153 [2024-04-26 14:55:50.884388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.410 passed 00:15:05.410 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-26 14:55:50.971590] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.410 [2024-04-26 14:55:51.067058] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:05.410 [2024-04-26 14:55:51.075041] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:05.410 [2024-04-26 14:55:51.083031] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:05.410 [2024-04-26 14:55:51.091041] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:05.410 [2024-04-26 14:55:51.120157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.666 passed 00:15:05.666 Test: admin_create_io_sq_verify_pc ...[2024-04-26 14:55:51.202080] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.666 [2024-04-26 14:55:51.222043] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:05.666 [2024-04-26 14:55:51.239421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.666 passed 00:15:05.666 Test: admin_create_io_qp_max_qps ...[2024-04-26 14:55:51.321944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:07.034 [2024-04-26 14:55:52.420036] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:07.290 [2024-04-26 14:55:52.804347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:07.291 passed 00:15:07.291 Test: admin_create_io_sq_shared_cq ...[2024-04-26 14:55:52.885613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:07.291 [2024-04-26 14:55:53.018028] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:07.548 [2024-04-26 14:55:53.055124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:07.548 passed 00:15:07.548 00:15:07.548 Run Summary: Type Total Ran Passed Failed Inactive 00:15:07.548 suites 1 1 n/a 0 0 00:15:07.548 tests 18 18 18 0 0 00:15:07.548 asserts 360 360 360 0 n/a 00:15:07.548 00:15:07.548 Elapsed time = 1.583 seconds 00:15:07.548 14:55:53 -- compliance/compliance.sh@42 -- # killprocess 3747654 00:15:07.548 14:55:53 -- common/autotest_common.sh@936 -- # '[' -z 3747654 ']' 00:15:07.549 14:55:53 -- common/autotest_common.sh@940 -- # kill -0 3747654 00:15:07.549 14:55:53 -- common/autotest_common.sh@941 -- # uname 00:15:07.549 14:55:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.549 14:55:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3747654 00:15:07.549 14:55:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.549 14:55:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.549 14:55:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3747654' 00:15:07.549 killing process with pid 3747654 00:15:07.549 14:55:53 -- common/autotest_common.sh@955 -- # kill 3747654 00:15:07.549 14:55:53 -- common/autotest_common.sh@960 -- # wait 3747654 00:15:07.806 14:55:53 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:07.806 14:55:53 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:07.806 00:15:07.806 real 0m5.767s 00:15:07.806 user 0m16.278s 00:15:07.806 sys 0m0.558s 00:15:07.806 14:55:53 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:15:07.806 14:55:53 -- common/autotest_common.sh@10 -- # set +x 00:15:07.806 ************************************ 00:15:07.806 END TEST nvmf_vfio_user_nvme_compliance 00:15:07.806 ************************************ 00:15:07.806 14:55:53 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:07.806 14:55:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:07.806 14:55:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.806 14:55:53 -- common/autotest_common.sh@10 -- # set +x 00:15:07.806 ************************************ 00:15:07.806 START TEST nvmf_vfio_user_fuzz 00:15:07.806 ************************************ 00:15:07.806 14:55:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:08.065 * Looking for test storage... 00:15:08.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.065 14:55:53 -- nvmf/common.sh@7 -- # uname -s 00:15:08.065 14:55:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.065 14:55:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.065 14:55:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.065 14:55:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.065 14:55:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.065 14:55:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.065 14:55:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.065 14:55:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.065 14:55:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.065 14:55:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.065 14:55:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:08.065 14:55:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:08.065 14:55:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.065 14:55:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.065 14:55:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.065 14:55:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.065 14:55:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.065 14:55:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.065 14:55:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.065 14:55:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.065 14:55:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.065 14:55:53 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.065 14:55:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.065 14:55:53 -- paths/export.sh@5 -- # export PATH 00:15:08.065 14:55:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.065 14:55:53 -- nvmf/common.sh@47 -- # : 0 00:15:08.065 14:55:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.065 14:55:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.065 14:55:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.065 14:55:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.065 14:55:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.065 14:55:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.065 14:55:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.065 14:55:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3748390 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3748390' 00:15:08.065 Process pid: 3748390 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:08.065 14:55:53 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3748390 00:15:08.065 14:55:53 -- 
common/autotest_common.sh@817 -- # '[' -z 3748390 ']' 00:15:08.065 14:55:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.065 14:55:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.065 14:55:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.065 14:55:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.065 14:55:53 -- common/autotest_common.sh@10 -- # set +x 00:15:08.323 14:55:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:08.323 14:55:53 -- common/autotest_common.sh@850 -- # return 0 00:15:08.323 14:55:53 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:09.257 14:55:54 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:09.257 14:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.257 14:55:54 -- common/autotest_common.sh@10 -- # set +x 00:15:09.257 14:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.257 14:55:54 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:09.257 14:55:54 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:09.257 14:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.257 14:55:54 -- common/autotest_common.sh@10 -- # set +x 00:15:09.257 malloc0 00:15:09.257 14:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.257 14:55:54 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:09.257 14:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.257 14:55:54 -- common/autotest_common.sh@10 -- # set +x 00:15:09.257 14:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.257 14:55:54 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:09.257 14:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.257 14:55:54 -- common/autotest_common.sh@10 -- # set +x 00:15:09.257 14:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.257 14:55:54 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:09.257 14:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.257 14:55:54 -- common/autotest_common.sh@10 -- # set +x 00:15:09.257 14:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.257 14:55:54 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:09.257 14:55:54 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:41.325 Fuzzing completed. 
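The fuzz pass that just completed is a single invocation of SPDK's nvme_fuzz app; a sketch with the flags decomposed, hedging their semantics to what the trace itself corroborates:

    # 30-second randomized command fuzz against the vfio-user target.
    #   -m 0x2     core mask (pins the app to core 1)
    #   -t 30      run time in seconds (matches the ~31 s wall clock above)
    #   -S 123456  RNG seed; the per-queue random_seed values below derive from it
    #   -F '...'   transport ID of the subsystem under test
    #   -N -a      passed through verbatim from vfio_user_fuzz.sh
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a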
Shutting down the fuzz application 00:15:41.325 00:15:41.325 Dumping successful admin opcodes: 00:15:41.325 8, 9, 10, 24, 00:15:41.325 Dumping successful io opcodes: 00:15:41.325 0, 00:15:41.325 NS: 0x200003a1ef00 I/O qp, Total commands completed: 520271, total successful commands: 2002, random_seed: 3105918976 00:15:41.325 NS: 0x200003a1ef00 admin qp, Total commands completed: 125284, total successful commands: 1025, random_seed: 1149408832 00:15:41.325 14:56:25 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:41.325 14:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.325 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 14:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.325 14:56:25 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3748390 00:15:41.325 14:56:25 -- common/autotest_common.sh@936 -- # '[' -z 3748390 ']' 00:15:41.325 14:56:25 -- common/autotest_common.sh@940 -- # kill -0 3748390 00:15:41.325 14:56:25 -- common/autotest_common.sh@941 -- # uname 00:15:41.325 14:56:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.325 14:56:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3748390 00:15:41.325 14:56:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:41.325 14:56:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:41.325 14:56:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3748390' 00:15:41.325 killing process with pid 3748390 00:15:41.325 14:56:25 -- common/autotest_common.sh@955 -- # kill 3748390 00:15:41.325 14:56:25 -- common/autotest_common.sh@960 -- # wait 3748390 00:15:41.325 14:56:25 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:41.325 14:56:25 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:41.325 00:15:41.325 real 0m32.233s 00:15:41.325 user 0m29.431s 00:15:41.325 sys 0m27.902s 00:15:41.325 14:56:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.325 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 ************************************ 00:15:41.325 END TEST nvmf_vfio_user_fuzz 00:15:41.325 ************************************ 00:15:41.325 14:56:25 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:41.325 14:56:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:41.325 14:56:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.325 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 ************************************ 00:15:41.325 START TEST nvmf_host_management 00:15:41.325 ************************************ 00:15:41.325 14:56:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:41.325 * Looking for test storage... 
00:15:41.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.325 14:56:25 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.325 14:56:25 -- nvmf/common.sh@7 -- # uname -s 00:15:41.325 14:56:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.325 14:56:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.325 14:56:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.325 14:56:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.325 14:56:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.325 14:56:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.325 14:56:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.325 14:56:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.325 14:56:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.325 14:56:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.325 14:56:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:41.325 14:56:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:41.325 14:56:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.325 14:56:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.325 14:56:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.325 14:56:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.325 14:56:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.325 14:56:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.325 14:56:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.325 14:56:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.325 14:56:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.325 14:56:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.325 14:56:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.325 14:56:25 -- paths/export.sh@5 -- # export PATH 00:15:41.325 14:56:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.325 14:56:25 -- nvmf/common.sh@47 -- # : 0 00:15:41.325 14:56:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.325 14:56:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.325 14:56:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.325 14:56:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.325 14:56:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.325 14:56:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.325 14:56:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.325 14:56:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.325 14:56:25 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.325 14:56:25 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.325 14:56:25 -- target/host_management.sh@105 -- # nvmftestinit 00:15:41.325 14:56:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:41.325 14:56:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.325 14:56:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:41.325 14:56:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:41.325 14:56:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:41.325 14:56:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.325 14:56:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.325 14:56:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.325 14:56:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:41.325 14:56:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:41.325 14:56:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.325 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:42.296 14:56:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:42.296 14:56:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:42.296 14:56:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:42.296 14:56:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:42.296 14:56:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:42.296 14:56:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:42.296 14:56:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:42.296 14:56:27 -- nvmf/common.sh@295 -- # net_devs=() 00:15:42.296 14:56:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:42.296 
14:56:27 -- nvmf/common.sh@296 -- # e810=() 00:15:42.296 14:56:27 -- nvmf/common.sh@296 -- # local -ga e810 00:15:42.296 14:56:27 -- nvmf/common.sh@297 -- # x722=() 00:15:42.296 14:56:27 -- nvmf/common.sh@297 -- # local -ga x722 00:15:42.296 14:56:27 -- nvmf/common.sh@298 -- # mlx=() 00:15:42.296 14:56:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:42.296 14:56:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.296 14:56:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:42.296 14:56:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:42.296 14:56:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:42.296 14:56:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.296 14:56:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:42.296 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:42.296 14:56:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.296 14:56:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:42.296 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:42.296 14:56:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:42.296 14:56:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.296 14:56:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.296 14:56:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:42.296 14:56:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.296 14:56:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:84:00.0: cvl_0_0' 00:15:42.296 Found net devices under 0000:84:00.0: cvl_0_0 00:15:42.296 14:56:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.296 14:56:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.296 14:56:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.296 14:56:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:42.296 14:56:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.296 14:56:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:42.296 Found net devices under 0000:84:00.1: cvl_0_1 00:15:42.296 14:56:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.296 14:56:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:42.296 14:56:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:42.296 14:56:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:42.296 14:56:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:42.296 14:56:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.296 14:56:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.296 14:56:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.296 14:56:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:42.296 14:56:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.296 14:56:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.296 14:56:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:42.296 14:56:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.296 14:56:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.296 14:56:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:42.296 14:56:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:42.296 14:56:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.296 14:56:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.296 14:56:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.296 14:56:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.296 14:56:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:42.296 14:56:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.296 14:56:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.296 14:56:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.296 14:56:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:42.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:15:42.296 00:15:42.296 --- 10.0.0.2 ping statistics --- 00:15:42.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.296 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:15:42.296 14:56:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:42.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:15:42.296 00:15:42.296 --- 10.0.0.1 ping statistics --- 00:15:42.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.296 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:15:42.296 14:56:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.296 14:56:28 -- nvmf/common.sh@411 -- # return 0 00:15:42.296 14:56:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:42.297 14:56:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.297 14:56:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:42.297 14:56:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:42.297 14:56:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.297 14:56:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:42.297 14:56:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:42.297 14:56:28 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:15:42.297 14:56:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:42.297 14:56:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:42.297 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:15:42.555 ************************************ 00:15:42.555 START TEST nvmf_host_management 00:15:42.555 ************************************ 00:15:42.555 14:56:28 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:15:42.555 14:56:28 -- target/host_management.sh@69 -- # starttarget 00:15:42.555 14:56:28 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:42.555 14:56:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:42.555 14:56:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:42.555 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:15:42.555 14:56:28 -- nvmf/common.sh@470 -- # nvmfpid=3753865 00:15:42.555 14:56:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:42.555 14:56:28 -- nvmf/common.sh@471 -- # waitforlisten 3753865 00:15:42.555 14:56:28 -- common/autotest_common.sh@817 -- # '[' -z 3753865 ']' 00:15:42.555 14:56:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.555 14:56:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:42.555 14:56:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.555 14:56:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:42.555 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:15:42.555 [2024-04-26 14:56:28.165923] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:15:42.556 [2024-04-26 14:56:28.165998] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.556 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.556 [2024-04-26 14:56:28.205978] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
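The two pings above close out nvmf_tcp_init: the rig's two NIC ports (cabled to each other, judging by the successful pings) are split across network namespaces so one host can act as both target and initiator. A minimal sketch of the equivalent manual setup, using the interface names from the trace; the addr-flush steps are omitted:

    ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator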
00:15:42.556 [2024-04-26 14:56:28.232794] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.814 [2024-04-26 14:56:28.319806] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.814 [2024-04-26 14:56:28.319876] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.814 [2024-04-26 14:56:28.319890] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.814 [2024-04-26 14:56:28.319900] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.814 [2024-04-26 14:56:28.319910] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.814 [2024-04-26 14:56:28.319998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.814 [2024-04-26 14:56:28.320061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.814 [2024-04-26 14:56:28.320131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:42.814 [2024-04-26 14:56:28.320133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.814 14:56:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:42.814 14:56:28 -- common/autotest_common.sh@850 -- # return 0 00:15:42.814 14:56:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:42.814 14:56:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:42.814 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:15:42.814 14:56:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.814 14:56:28 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:42.814 14:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.814 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:15:42.814 [2024-04-26 14:56:28.472738] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.814 14:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.814 14:56:28 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:42.814 14:56:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:42.814 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:15:42.814 14:56:28 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:42.814 14:56:28 -- target/host_management.sh@23 -- # cat 00:15:42.814 14:56:28 -- target/host_management.sh@30 -- # rpc_cmd 00:15:42.814 14:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.814 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:15:42.814 Malloc0 00:15:42.814 [2024-04-26 14:56:28.533823] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.814 14:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.814 14:56:28 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:42.814 14:56:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:42.814 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:15:43.074 14:56:28 -- target/host_management.sh@73 -- # perfpid=3754021 00:15:43.075 14:56:28 -- target/host_management.sh@74 -- # waitforlisten 3754021 /var/tmp/bdevperf.sock 00:15:43.075 14:56:28 -- common/autotest_common.sh@817 -- # '[' -z 3754021 ']' 00:15:43.075 14:56:28 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:15:43.075 14:56:28 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:43.075 14:56:28 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:43.075 14:56:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:43.075 14:56:28 -- nvmf/common.sh@521 -- # config=() 00:15:43.075 14:56:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:43.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:43.075 14:56:28 -- nvmf/common.sh@521 -- # local subsystem config 00:15:43.075 14:56:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:43.075 14:56:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:43.075 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:15:43.075 14:56:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:43.075 { 00:15:43.075 "params": { 00:15:43.075 "name": "Nvme$subsystem", 00:15:43.075 "trtype": "$TEST_TRANSPORT", 00:15:43.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:43.075 "adrfam": "ipv4", 00:15:43.075 "trsvcid": "$NVMF_PORT", 00:15:43.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:43.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:43.075 "hdgst": ${hdgst:-false}, 00:15:43.075 "ddgst": ${ddgst:-false} 00:15:43.075 }, 00:15:43.075 "method": "bdev_nvme_attach_controller" 00:15:43.075 } 00:15:43.075 EOF 00:15:43.075 )") 00:15:43.075 14:56:28 -- nvmf/common.sh@543 -- # cat 00:15:43.075 14:56:28 -- nvmf/common.sh@545 -- # jq . 00:15:43.075 14:56:28 -- nvmf/common.sh@546 -- # IFS=, 00:15:43.075 14:56:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:43.075 "params": { 00:15:43.075 "name": "Nvme0", 00:15:43.075 "trtype": "tcp", 00:15:43.075 "traddr": "10.0.0.2", 00:15:43.075 "adrfam": "ipv4", 00:15:43.075 "trsvcid": "4420", 00:15:43.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:43.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:43.075 "hdgst": false, 00:15:43.075 "ddgst": false 00:15:43.075 }, 00:15:43.075 "method": "bdev_nvme_attach_controller" 00:15:43.075 }' 00:15:43.075 [2024-04-26 14:56:28.612475] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:15:43.075 [2024-04-26 14:56:28.612561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754021 ] 00:15:43.075 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.075 [2024-04-26 14:56:28.645854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:43.075 [2024-04-26 14:56:28.675368] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.075 [2024-04-26 14:56:28.760668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.333 Running I/O for 10 seconds... 
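bdevperf was started with '--json /dev/fd/63'; the heredoc fragment printed above is only the inner config object. A sketch of the full document bdevperf reads, assuming the usual subsystems/bdev wrapper that gen_nvmf_target_json builds around it:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

The waitforio helper traced next simply polls until I/O is flowing; a sketch of its loop, matching the trace where read_io_count is 67 on the first probe and 515 after a 0.25 s sleep:

    # Retry up to 10 times for Nvme0n1 to complete at least 100 reads.
    for ((i = 10; i != 0; i--)); do
        n=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
            jq -r '.bdevs[0].num_read_ops')
        [ "$n" -ge 100 ] && break
        sleep 0.25
    done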
00:15:43.333 14:56:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:43.333 14:56:29 -- common/autotest_common.sh@850 -- # return 0 00:15:43.333 14:56:29 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:43.333 14:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.333 14:56:29 -- common/autotest_common.sh@10 -- # set +x 00:15:43.333 14:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.333 14:56:29 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:43.333 14:56:29 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:43.333 14:56:29 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:43.333 14:56:29 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:43.333 14:56:29 -- target/host_management.sh@52 -- # local ret=1 00:15:43.333 14:56:29 -- target/host_management.sh@53 -- # local i 00:15:43.333 14:56:29 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:43.333 14:56:29 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:43.333 14:56:29 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:43.333 14:56:29 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:43.333 14:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.333 14:56:29 -- common/autotest_common.sh@10 -- # set +x 00:15:43.333 14:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.591 14:56:29 -- target/host_management.sh@55 -- # read_io_count=67 00:15:43.591 14:56:29 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:15:43.591 14:56:29 -- target/host_management.sh@62 -- # sleep 0.25 00:15:43.591 14:56:29 -- target/host_management.sh@54 -- # (( i-- )) 00:15:43.591 14:56:29 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:43.591 14:56:29 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:43.591 14:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.591 14:56:29 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:43.591 14:56:29 -- common/autotest_common.sh@10 -- # set +x 00:15:43.852 14:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.852 14:56:29 -- target/host_management.sh@55 -- # read_io_count=515 00:15:43.852 14:56:29 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:15:43.852 14:56:29 -- target/host_management.sh@59 -- # ret=0 00:15:43.852 14:56:29 -- target/host_management.sh@60 -- # break 00:15:43.852 14:56:29 -- target/host_management.sh@64 -- # return 0 00:15:43.852 14:56:29 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:43.852 14:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.852 14:56:29 -- common/autotest_common.sh@10 -- # set +x 00:15:43.852 [2024-04-26 14:56:29.368748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.852 [2024-04-26 14:56:29.368796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.852 [2024-04-26 14:56:29.368824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:43.852-00:15:43.854 [2024-04-26 14:56:29.368841 .. 14:56:29.370978] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: every I/O still queued on qid:1 completed as ABORTED - SQ DELETION (00/08) when the submission queue was torn down for the controller reset: WRITE cid:28..63 (lba 77312..81792, len:128 each) and READ cid:0..25 (lba 73728..76928, len:128 each), all with cdw0:0 sqhd:0000 p:0 m:0 dnr:0. One representative command/completion pair:
00:15:43.852 [2024-04-26 14:56:29.368859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:43.852 [2024-04-26 14:56:29.368874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:43.852-00:15:43.854 (the same pair repeats for each of the remaining 61 outstanding commands)
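The dump above is mechanical: one command/completion pair per outstanding I/O on the deleted queue. When triaging such resets it is usually enough to count and bucket the aborts rather than read them line by line; a minimal sketch, assuming the run's output was saved to nvmf_trace.log (a hypothetical filename, not produced by this job):

  # Count how many queued commands were aborted by the SQ deletion,
  # then bucket the aborted commands by opcode (READ vs WRITE).
  grep -c 'ABORTED - SQ DELETION' nvmf_trace.log
  grep -oE '(READ|WRITE) sqid:1 cid:[0-9]+' nvmf_trace.log | awk '{print $1}' | sort | uniq -c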
00:15:43.854 [2024-04-26 14:56:29.371160] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1864bd0 was disconnected and freed. reset controller.
00:15:43.854 [2024-04-26 14:56:29.372331] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:15:43.854 14:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:43.854 14:56:29 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:15:43.854 14:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:43.854 14:56:29 -- common/autotest_common.sh@10 -- # set +x
00:15:43.854 task offset: 77056 on job bdev=Nvme0n1 fails
00:15:43.854
00:15:43.854 Latency(us)
00:15:43.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:43.854 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:43.854 Job: Nvme0n1 ended in about 0.39 seconds with error
00:15:43.854 Verification LBA range: start 0x0 length 0x400
00:15:43.854 Nvme0n1 : 0.39 1464.28 91.52 162.70 0.00 38204.39 2973.39 34952.53
00:15:43.854 ===================================================================
00:15:43.854 Total : 1464.28 91.52 162.70 0.00 38204.39 2973.39 34952.53
00:15:43.854 [2024-04-26 14:56:29.374235] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:43.854 [2024-04-26 14:56:29.374266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1453e50 (9): Bad file descriptor
00:15:43.854 [2024-04-26 14:56:29.376292] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:15:43.854 [2024-04-26 14:56:29.376513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:15:43.854 [2024-04-26 14:56:29.376541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:43.854 [2024-04-26 14:56:29.376563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:15:43.854 [2024-04-26 14:56:29.376578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:15:43.854 [2024-04-26 14:56:29.376592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:15:43.854 [2024-04-26 14:56:29.376605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1453e50
00:15:43.854 [2024-04-26 14:56:29.376646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1453e50 (9): Bad file descriptor
00:15:43.854 [2024-04-26 14:56:29.376671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:15:43.854 [2024-04-26 14:56:29.376686] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:15:43.854 [2024-04-26 14:56:29.376702] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:15:43.854 [2024-04-26 14:56:29.376725] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:15:43.854 14:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:43.854 14:56:29 -- target/host_management.sh@87 -- # sleep 1
00:15:44.789 14:56:30 -- target/host_management.sh@91 -- # kill -9 3754021
00:15:44.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3754021) - No such process
00:15:44.789 14:56:30 -- target/host_management.sh@91 -- # true
00:15:44.789 14:56:30 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:15:44.789 14:56:30 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:15:44.789 14:56:30 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:15:44.789 14:56:30 (nvmf/common.sh@521-546: gen_nvmf_target_json expands its heredoc template, one "bdev_nvme_attach_controller" entry per subsystem, and pipes it through cat and jq)
00:15:44.789 14:56:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }'
00:15:44.789 [2024-04-26 14:56:30.429161] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
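The sct 1, sc 132 status above is the fabrics CONNECT being rejected with ACCESS DENIED: the subsystem was created without allow-any-host and host0's NQN had not yet been whitelisted, which is exactly what the ctrlr.c "does not allow host" error says. The test then adds the host and reconnects. A minimal sketch of that allow-host flow against a running nvmf_tgt, assuming SPDK's scripts/rpc.py and the NQNs from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as used in this job
  subnqn=nqn.2016-06.io.spdk:cnode0
  hostnqn=nqn.2016-06.io.spdk:host0
  # Subsystem without -a (allow-any-host): unknown hosts fail CONNECT with
  # ACCESS DENIED (sct:1 sc:132), as printed in the log above.
  "$rpc" nvmf_create_subsystem "$subnqn" -s SPDK0
  # Whitelist the host; subsequent CONNECTs from $hostnqn succeed.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn"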
00:15:44.789 [2024-04-26 14:56:30.429234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754183 ]
00:15:44.789 EAL: No free 2048 kB hugepages reported on node 1
00:15:44.789 [2024-04-26 14:56:30.465315] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:44.789 [2024-04-26 14:56:30.496068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:45.046 [2024-04-26 14:56:30.583278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:45.306 Running I/O for 1 seconds...
00:15:46.239
00:15:46.239 Latency(us)
00:15:46.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:46.239 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:46.239 Verification LBA range: start 0x0 length 0x400
00:15:46.239 Nvme0n1 : 1.02 1502.00 93.88 0.00 0.00 41955.44 6893.42 34369.99
00:15:46.239 ===================================================================
00:15:46.239 Total : 1502.00 93.88 0.00 0.00 41955.44 6893.42 34369.99
00:15:46.497 14:56:32 -- target/host_management.sh@102 -- # stoptarget
00:15:46.497 14:56:32 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:15:46.497 14:56:32 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:15:46.497 14:56:32 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:15:46.497 14:56:32 -- target/host_management.sh@40 -- # nvmftestfini
00:15:46.497 14:56:32 -- nvmf/common.sh@477 -- # nvmfcleanup
00:15:46.497 14:56:32 -- nvmf/common.sh@117 -- # sync
00:15:46.497 14:56:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:46.497 14:56:32 -- nvmf/common.sh@120 -- # set +e
00:15:46.497 14:56:32 -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:46.497 14:56:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:46.497 rmmod nvme_tcp
00:15:46.497 rmmod nvme_fabrics
00:15:46.497 rmmod nvme_keyring
00:15:46.497 14:56:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:46.497 14:56:32 -- nvmf/common.sh@124 -- # set -e
00:15:46.497 14:56:32 -- nvmf/common.sh@125 -- # return 0
00:15:46.497 14:56:32 -- nvmf/common.sh@478 -- # '[' -n 3753865 ']'
00:15:46.497 14:56:32 -- nvmf/common.sh@479 -- # killprocess 3753865
00:15:46.497 14:56:32 -- common/autotest_common.sh@936 -- # '[' -z 3753865 ']'
00:15:46.497 14:56:32 -- common/autotest_common.sh@940 -- # kill -0 3753865
00:15:46.497 14:56:32 -- common/autotest_common.sh@941 -- # uname
00:15:46.497 14:56:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:46.497 14:56:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3753865
00:15:46.497 14:56:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:15:46.497 14:56:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:15:46.497 14:56:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3753865'
00:15:46.497 killing process with pid 3753865
00:15:46.497 14:56:32 -- common/autotest_common.sh@955 -- # kill 3753865
00:15:46.497 14:56:32 -- common/autotest_common.sh@960 -- # wait 3753865
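For reference, the --json /dev/fd/62 idiom above just hands bdevperf a generated config on an anonymous file descriptor; the same one-second verify run can be reproduced with a plain file. A hedged sketch, assuming the standard SPDK JSON-config layout (the "subsystems"/"bdev" wrapper is the usual shape; only the params block is taken verbatim from the log):

  cat > /tmp/bdevperf.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": false, "ddgst": false } } ] } ]
  }
  EOF
  # Same workload flags as the job: 64-deep queue, 64 KiB verify I/O, 1 second.
  ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1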
00:15:46.755 [2024-04-26 14:56:32.426262] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:15:46.755 14:56:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:15:46.755 14:56:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:15:46.755 14:56:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:15:46.755 14:56:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:46.755 14:56:32 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:46.755 14:56:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:46.755 14:56:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:46.755 14:56:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:49.292 14:56:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:49.292
00:15:49.292 real    0m6.379s
00:15:49.292 user    0m18.596s
00:15:49.292 sys     0m1.252s
00:15:49.292 14:56:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:15:49.292 14:56:34 -- common/autotest_common.sh@10 -- # set +x
00:15:49.292 ************************************
00:15:49.292 END TEST nvmf_host_management
00:15:49.292 ************************************
00:15:49.292 14:56:34 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:15:49.292
00:15:49.292 real    0m8.637s
00:15:49.292 user    0m19.424s
00:15:49.292 sys     0m2.690s
00:15:49.292 14:56:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:15:49.292 14:56:34 -- common/autotest_common.sh@10 -- # set +x
00:15:49.292 ************************************
00:15:49.292 END TEST nvmf_host_management
00:15:49.292 ************************************
00:15:49.292 14:56:34 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:15:49.292 14:56:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:15:49.292 14:56:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:49.292 14:56:34 -- common/autotest_common.sh@10 -- # set +x
00:15:49.292 ************************************
00:15:49.292 START TEST nvmf_lvol
00:15:49.292 ************************************
00:15:49.292 14:56:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:15:49.292 * Looking for test storage...
00:15:49.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:49.292 14:56:34 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:49.292 14:56:34 -- nvmf/common.sh@7 -- # uname -s
00:15:49.292 14:56:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:49.292 14:56:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:49.292 14:56:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:49.292 14:56:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:49.292 14:56:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:49.292 14:56:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:49.292 14:56:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:49.292 14:56:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:49.292 14:56:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:49.292 14:56:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:49.292 14:56:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:15:49.292 14:56:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:15:49.292 14:56:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:49.292 14:56:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:49.292 14:56:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:49.292 14:56:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:49.292 14:56:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:49.292 14:56:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:49.292 14:56:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:49.292 14:56:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:49.292 14:56:34 (paths/export.sh@2-@4: PATH prepended with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin; the fully expanded value repeats at each step)
00:15:49.292 14:56:34 -- paths/export.sh@5 -- # export PATH
00:15:49.292 14:56:34 -- paths/export.sh@6 -- # echo $PATH (same expanded value)
00:15:49.292 14:56:34 -- nvmf/common.sh@47 -- # : 0
00:15:49.292 14:56:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:49.292 14:56:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:49.292 14:56:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:49.292 14:56:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:49.292 14:56:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:49.292 14:56:34 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:49.292 14:56:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:49.292 14:56:34 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:49.292 14:56:34 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:49.292 14:56:34 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:49.292 14:56:34 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:15:49.292 14:56:34 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:15:49.292 14:56:34 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:15:49.292 14:56:34 -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:15:49.292 14:56:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:15:49.292 14:56:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:49.292 14:56:34 -- nvmf/common.sh@437 -- # prepare_net_devs
00:15:49.292 14:56:34 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:15:49.292 14:56:34 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:15:49.292 14:56:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:49.292 14:56:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:49.292 14:56:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:49.292 14:56:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:15:49.292 14:56:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:15:49.292 14:56:34 -- nvmf/common.sh@285 -- # xtrace_disable
00:15:49.292 14:56:34 -- common/autotest_common.sh@10 -- # set +x
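gather_supported_nvmf_pci_devs, whose trace follows, walks the PCI bus and keeps only NICs whose vendor:device IDs are on the supported Intel E810/X722 and Mellanox lists. A rough standalone equivalent using lspci, for illustration only (IDs taken from the trace below; this is not the harness's actual implementation):

  # List candidate NVMe-oF test NICs by PCI ID, roughly as the harness does.
  # 8086:1592 / 8086:159b = Intel E810, 8086:37d2 = X722;
  # 15b3:* entries are the accepted Mellanox ConnectX variants.
  lspci -Dnn | grep -Ei '(8086:(1592|159b|37d2)|15b3:(a2dc|1021|a2d6|101d|1017|1019|1015|1013))'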
00:15:51.195 14:56:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:15:51.195 14:56:36 (nvmf/common.sh@291-298: pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays declared)
00:15:51.195 14:56:36 (nvmf/common.sh@301-318: supported device IDs appended -- e810: 0x1592 0x159b; x722: 0x37d2; mlx: 0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
00:15:51.195 14:56:36 (nvmf/common.sh@320-335: transport is tcp and the NIC class is e810, so pci_devs=("${e810[@]}"); 2 candidate devices found)
00:15:51.195 Found 0000:84:00.0 (0x8086 - 0x159b)
00:15:51.195 Found 0000:84:00.1 (0x8086 - 0x159b)
00:15:51.195 14:56:36 (nvmf/common.sh@340-388: per-device ice driver checks pass; pci_net_devs globbed from /sys/bus/pci/devices/$pci/net/)
00:15:51.195 Found net devices under 0000:84:00.0: cvl_0_0
00:15:51.195 Found net devices under 0000:84:00.1: cvl_0_1
00:15:51.195 14:56:36 -- nvmf/common.sh@403 -- # is_hw=yes
00:15:51.195 14:56:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:15:51.195 14:56:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:15:51.195 14:56:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:51.195 14:56:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:51.195 14:56:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:51.195 14:56:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:51.195 14:56:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:51.195 14:56:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:51.195 14:56:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:51.195 14:56:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:15:51.195 14:56:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:15:51.195 14:56:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:15:51.195 14:56:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:51.195 14:56:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:51.195 14:56:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:51.195 14:56:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:15:51.195 14:56:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:51.195 14:56:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:51.195 14:56:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:51.195 14:56:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:51.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:51.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms
00:15:51.195
00:15:51.195 --- 10.0.0.2 ping statistics ---
00:15:51.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:51.195 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:15:51.195 14:56:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:51.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:51.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms
00:15:51.195
00:15:51.195 --- 10.0.0.1 ping statistics ---
00:15:51.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:51.195 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:15:51.195 14:56:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:51.195 14:56:36 -- nvmf/common.sh@411 -- # return 0
00:15:51.195 14:56:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:15:51.195 14:56:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:51.195 14:56:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:15:51.195 14:56:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:51.195 14:56:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:15:51.195 14:56:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:15:51.454 14:56:36 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:15:51.454 14:56:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:15:51.454 14:56:36 -- common/autotest_common.sh@710 -- # xtrace_disable
00:15:51.454 14:56:36 -- common/autotest_common.sh@10 -- # set +x
00:15:51.454 14:56:36 -- nvmf/common.sh@470 -- # nvmfpid=3756420
00:15:51.454 14:56:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:15:51.454 14:56:36 -- nvmf/common.sh@471 -- # waitforlisten 3756420
00:15:51.454 14:56:36 -- common/autotest_common.sh@817 -- # '[' -z 3756420 ']'
00:15:51.454 14:56:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:51.454 14:56:36 -- common/autotest_common.sh@822 -- # local max_retries=100
00:15:51.454 14:56:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:51.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:51.454 14:56:36 -- common/autotest_common.sh@826 -- # xtrace_disable
00:15:51.454 14:56:36 -- common/autotest_common.sh@10 -- # set +x
00:15:51.454 [2024-04-26 14:56:37.001004] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:15:51.454 [2024-04-26 14:56:37.001110] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:51.454 EAL: No free 2048 kB hugepages reported on node 1
00:15:51.454 [2024-04-26 14:56:37.040108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:51.454 [2024-04-26 14:56:37.070446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:51.454 [2024-04-26 14:56:37.160842] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:51.454 [2024-04-26 14:56:37.160899] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:51.454 [2024-04-26 14:56:37.160923] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:51.454 [2024-04-26 14:56:37.160934] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
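The nvmf_tcp_init sequence above builds a two-endpoint topology on a single box: the target port (cvl_0_0, 10.0.0.2) is moved into a private network namespace and the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic really crosses the link between the two E810 ports. A condensed replay of those commands, with the interface names and addresses as in this run:

  ns=cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"                  # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                               # initiator -> target sanity check

The target binary is then launched inside the namespace, which is why the log shows ip netns exec cvl_0_0_ns_spdk in front of nvmf_tgt.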
00:15:51.454 [2024-04-26 14:56:37.160945] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:51.454 [2024-04-26 14:56:37.161004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:51.454 [2024-04-26 14:56:37.161060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:15:51.454 [2024-04-26 14:56:37.161066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:51.713 14:56:37 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:15:51.713 14:56:37 -- common/autotest_common.sh@850 -- # return 0
00:15:51.713 14:56:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:15:51.713 14:56:37 -- common/autotest_common.sh@716 -- # xtrace_disable
00:15:51.713 14:56:37 -- common/autotest_common.sh@10 -- # set +x
00:15:51.713 14:56:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:51.713 14:56:37 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:15:51.970 [2024-04-26 14:56:37.559392] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:51.970 14:56:37 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:15:52.229 14:56:37 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:15:52.229 14:56:37 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:15:52.486 14:56:38 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:15:52.486 14:56:38 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:15:52.744 14:56:38 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:15:53.001 14:56:38 -- target/nvmf_lvol.sh@29 -- # lvs=e44298aa-bae8-4a00-bd37-ccd1552b003c
00:15:53.001 14:56:38 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e44298aa-bae8-4a00-bd37-ccd1552b003c lvol 20
00:15:53.259 14:56:38 -- target/nvmf_lvol.sh@32 -- # lvol=7c19baac-87a7-49cd-8977-d86a068ec680
00:15:53.259 14:56:38 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:15:53.824 14:56:39 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7c19baac-87a7-49cd-8977-d86a068ec680
00:15:53.824 14:56:39 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:15:54.082 [2024-04-26 14:56:39.753917] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:54.082 14:56:39 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:15:54.340 14:56:40 -- target/nvmf_lvol.sh@42 -- # perf_pid=3756845
00:15:54.340 14:56:40 -- target/nvmf_lvol.sh@44 -- # sleep 1
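Read together, the rpc.py calls above build the whole stack this test exercises: two 64 MB malloc bdevs striped into a raid0, a logical-volume store on the raid, a 20 MiB lvol in that store, and an NVMe-oF subsystem exporting the lvol over the TCP listener. As a hedged, reusable sketch (rpc.py path shortened; sizes, names and NQN as in the run; the captured-UUID plumbing is mine, not the test script's):

  rpc=scripts/rpc.py
  m0=$($rpc bdev_malloc_create 64 512)            # 64 MB, 512 B blocks -> "Malloc0"
  m1=$($rpc bdev_malloc_create 64 512)            # -> "Malloc1"
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"    # raid0 across both
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)            # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)           # 20 MiB lvol, prints its UUID
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420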
00:15:54.340 14:56:40 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:15:54.340 EAL: No free 2048 kB hugepages reported on node 1
00:15:55.717 14:56:41 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7c19baac-87a7-49cd-8977-d86a068ec680 MY_SNAPSHOT
00:15:55.717 14:56:41 -- target/nvmf_lvol.sh@47 -- # snapshot=eacd7415-9b58-4f24-abcb-3c83438ce908
00:15:55.717 14:56:41 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7c19baac-87a7-49cd-8977-d86a068ec680 30
00:15:55.974 14:56:41 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone eacd7415-9b58-4f24-abcb-3c83438ce908 MY_CLONE
00:15:56.595 14:56:42 -- target/nvmf_lvol.sh@49 -- # clone=4de95f36-0470-40fb-8afc-6258d63ec3b2
00:15:56.595 14:56:42 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4de95f36-0470-40fb-8afc-6258d63ec3b2
00:15:57.164 14:56:42 -- target/nvmf_lvol.sh@53 -- # wait 3756845
00:16:05.282 Initializing NVMe Controllers
00:16:05.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:16:05.282 Controller IO queue size 128, less than required.
00:16:05.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:05.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:16:05.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:16:05.283 Initialization complete. Launching workers.
00:16:05.283 ========================================================
00:16:05.283 Latency(us)
00:16:05.283 Device Information : IOPS MiB/s Average min max
00:16:05.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10452.13 40.83 12249.50 2062.73 125015.58
00:16:05.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10389.83 40.59 12328.29 2142.43 58522.15
00:16:05.283 ========================================================
00:16:05.283 Total : 20841.96 81.41 12288.78 2062.73 125015.58
00:16:05.283
00:16:05.283 14:56:50 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:16:05.283 14:56:50 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7c19baac-87a7-49cd-8977-d86a068ec680
00:16:05.541 14:56:51 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e44298aa-bae8-4a00-bd37-ccd1552b003c
00:16:05.801 14:56:51 -- target/nvmf_lvol.sh@60 -- # rm -f
00:16:05.801 14:56:51 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:16:05.801 14:56:51 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:16:05.801 14:56:51 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:05.801 14:56:51 -- nvmf/common.sh@117 -- # sync
00:16:05.801 14:56:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:05.801 14:56:51 -- nvmf/common.sh@120 -- # set +e
00:16:05.801 14:56:51 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:05.801 14:56:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:05.801 rmmod nvme_tcp
00:16:05.801 rmmod nvme_fabrics
00:16:05.801 rmmod nvme_keyring
00:16:05.801 14:56:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
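The snapshot/clone sequence above is the heart of the lvol test, and it runs while spdk_nvme_perf is hammering the namespace: the live lvol is snapshotted, the origin is grown from 20 to 30 MiB under I/O, a writable clone is taken off the read-only snapshot, and the clone is then inflated so it owns all of its clusters and no longer depends on the snapshot. A hedged standalone replay, capturing the UUIDs from the rpc output rather than hard-coding the values seen in this run:

  rpc=scripts/rpc.py
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the live lvol
  $rpc bdev_lvol_resize "$lvol" 30                      # grow origin 20 MiB -> 30 MiB under I/O
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin, writable clone of the snapshot
  $rpc bdev_lvol_inflate "$clone"                       # allocate all clusters; clone becomes independent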
00:16:05.801 14:56:51 -- nvmf/common.sh@124 -- # set -e
00:16:05.801 14:56:51 -- nvmf/common.sh@125 -- # return 0
00:16:05.801 14:56:51 -- nvmf/common.sh@478 -- # '[' -n 3756420 ']'
00:16:05.801 14:56:51 -- nvmf/common.sh@479 -- # killprocess 3756420
00:16:05.801 14:56:51 -- common/autotest_common.sh@936 -- # '[' -z 3756420 ']'
00:16:05.801 14:56:51 -- common/autotest_common.sh@940 -- # kill -0 3756420
00:16:05.801 14:56:51 -- common/autotest_common.sh@941 -- # uname
00:16:05.801 14:56:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:05.801 14:56:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3756420
00:16:05.801 14:56:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:05.801 14:56:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:05.801 14:56:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3756420'
00:16:05.801 killing process with pid 3756420
00:16:05.801 14:56:51 -- common/autotest_common.sh@955 -- # kill 3756420
00:16:05.801 14:56:51 -- common/autotest_common.sh@960 -- # wait 3756420
00:16:06.059 14:56:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:16:06.059 14:56:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:16:06.059 14:56:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:16:06.059 14:56:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:06.059 14:56:51 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:06.059 14:56:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:06.059 14:56:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:06.059 14:56:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:08.597 14:56:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:08.597
00:16:08.597 real    0m19.073s
00:16:08.597 user    1m5.131s
00:16:08.597 sys     0m5.720s
00:16:08.597 14:56:53 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:08.597 14:56:53 -- common/autotest_common.sh@10 -- # set +x
00:16:08.597 ************************************
00:16:08.597 END TEST nvmf_lvol
00:16:08.597 ************************************
00:16:08.597 14:56:53 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:16:08.597 14:56:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:08.597 14:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:08.597 14:56:53 -- common/autotest_common.sh@10 -- # set +x
00:16:08.597 ************************************
00:16:08.597 START TEST nvmf_lvs_grow
00:16:08.597 ************************************
00:16:08.597 14:56:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:16:08.597 * Looking for test storage...
00:16:08.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:08.597 14:56:53 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:08.597 14:56:53 -- nvmf/common.sh@7 -- # uname -s
00:16:08.597 14:56:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:08.597 14:56:53 (nvmf/common.sh@9-22: same environment as the previous test -- NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02, NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02, NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn)
00:16:08.597 14:56:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:08.597 14:56:53 (scripts/common.sh@508-517 and paths/export.sh@2-@6: toolchain directories prepended to PATH and exported, as above)
00:16:08.597 14:56:53 (nvmf/common.sh@47-51: NVMF_APP_SHM_ID exported, build_nvmf_app_args appends -i "$NVMF_APP_SHM_ID" -e 0xFFFF, have_pci_nics=0)
00:16:08.597 14:56:53 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:08.597 14:56:53 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:16:08.597 14:56:53 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit
00:16:08.597 14:56:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:16:08.597 14:56:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:08.597 14:56:53 -- nvmf/common.sh@437 -- # prepare_net_devs
00:16:08.597 14:56:53 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:16:08.597 14:56:53 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:16:08.597 14:56:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:08.597 14:56:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:08.597 14:56:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:08.597 14:56:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:16:08.597 14:56:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:16:08.597 14:56:53 -- nvmf/common.sh@285 -- # xtrace_disable
00:16:08.597 14:56:53 -- common/autotest_common.sh@10 -- # set +x
00:16:10.504 14:56:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 14:56:55 -- nvmf/common.sh@291 -- # pci_devs=() 14:56:55 -- nvmf/common.sh@291 -- # local -a pci_devs 14:56:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 14:56:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 14:56:55 -- nvmf/common.sh@293 -- # pci_drivers=() 14:56:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 14:56:55 -- nvmf/common.sh@295 -- # net_devs=() 14:56:55
-- nvmf/common.sh@295 -- # local -ga net_devs 00:16:10.504 14:56:55 -- nvmf/common.sh@296 -- # e810=() 00:16:10.504 14:56:55 -- nvmf/common.sh@296 -- # local -ga e810 00:16:10.504 14:56:55 -- nvmf/common.sh@297 -- # x722=() 00:16:10.504 14:56:55 -- nvmf/common.sh@297 -- # local -ga x722 00:16:10.504 14:56:55 -- nvmf/common.sh@298 -- # mlx=() 00:16:10.504 14:56:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:10.504 14:56:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.504 14:56:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:10.504 14:56:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:10.504 14:56:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:10.504 14:56:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.504 14:56:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:10.504 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:10.504 14:56:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.504 14:56:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:10.504 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:10.504 14:56:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:10.504 14:56:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:10.504 14:56:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.504 14:56:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.504 14:56:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:10.504 14:56:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.504 14:56:55 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:10.504 Found net devices under 0000:84:00.0: cvl_0_0 00:16:10.504 14:56:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.504 14:56:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.505 14:56:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.505 14:56:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:10.505 14:56:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.505 14:56:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:10.505 Found net devices under 0000:84:00.1: cvl_0_1 00:16:10.505 14:56:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.505 14:56:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:10.505 14:56:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:10.505 14:56:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:10.505 14:56:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:10.505 14:56:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:10.505 14:56:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.505 14:56:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.505 14:56:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.505 14:56:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:10.505 14:56:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.505 14:56:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.505 14:56:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:10.505 14:56:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.505 14:56:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.505 14:56:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:10.505 14:56:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:10.505 14:56:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.505 14:56:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.505 14:56:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.505 14:56:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.505 14:56:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:10.505 14:56:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.505 14:56:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.505 14:56:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.505 14:56:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:10.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:16:10.505 00:16:10.505 --- 10.0.0.2 ping statistics --- 00:16:10.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.505 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:10.505 14:56:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:16:10.505 00:16:10.505 --- 10.0.0.1 ping statistics --- 00:16:10.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.505 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:10.505 14:56:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.505 14:56:55 -- nvmf/common.sh@411 -- # return 0 00:16:10.505 14:56:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:10.505 14:56:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.505 14:56:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:10.505 14:56:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:10.505 14:56:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.505 14:56:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:10.505 14:56:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:10.505 14:56:55 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:10.505 14:56:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:10.505 14:56:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:10.505 14:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:10.505 14:56:55 -- nvmf/common.sh@470 -- # nvmfpid=3760123 00:16:10.505 14:56:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:10.505 14:56:55 -- nvmf/common.sh@471 -- # waitforlisten 3760123 00:16:10.505 14:56:55 -- common/autotest_common.sh@817 -- # '[' -z 3760123 ']' 00:16:10.505 14:56:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.505 14:56:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:10.505 14:56:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.505 14:56:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:10.505 14:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:10.505 [2024-04-26 14:56:56.029117] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:16:10.505 [2024-04-26 14:56:56.029190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.505 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.505 [2024-04-26 14:56:56.065634] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:10.505 [2024-04-26 14:56:56.097664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.505 [2024-04-26 14:56:56.185198] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.505 [2024-04-26 14:56:56.185264] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.505 [2024-04-26 14:56:56.185280] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.505 [2024-04-26 14:56:56.185294] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:10.505 [2024-04-26 14:56:56.185306] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.505 [2024-04-26 14:56:56.185346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.764 14:56:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:10.764 14:56:56 -- common/autotest_common.sh@850 -- # return 0 00:16:10.764 14:56:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:10.764 14:56:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:10.764 14:56:56 -- common/autotest_common.sh@10 -- # set +x 00:16:10.764 14:56:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.764 14:56:56 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:11.022 [2024-04-26 14:56:56.591762] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:11.022 14:56:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:11.022 14:56:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.022 14:56:56 -- common/autotest_common.sh@10 -- # set +x 00:16:11.022 ************************************ 00:16:11.022 START TEST lvs_grow_clean 00:16:11.022 ************************************ 00:16:11.022 14:56:56 -- common/autotest_common.sh@1111 -- # lvs_grow 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:11.022 14:56:56 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:11.588 14:56:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:11.588 14:56:57 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:11.588 14:56:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:11.588 14:56:57 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:11.588 14:56:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:11.846 14:56:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:11.846 14:56:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:11.846 14:56:57 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dc344c75-c779-4d4c-b87e-fd746278daaf lvol 
150 00:16:12.106 14:56:57 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4bfb784b-2997-4edd-906b-5c6a60bcd1f7 00:16:12.106 14:56:57 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:12.106 14:56:57 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:12.364 [2024-04-26 14:56:58.057354] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:12.364 [2024-04-26 14:56:58.057454] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:12.364 true 00:16:12.364 14:56:58 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:12.364 14:56:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:12.623 14:56:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:12.623 14:56:58 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:12.881 14:56:58 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4bfb784b-2997-4edd-906b-5c6a60bcd1f7 00:16:13.139 14:56:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:13.398 [2024-04-26 14:56:59.032322] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.398 14:56:59 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:13.658 14:56:59 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3760569 00:16:13.658 14:56:59 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:13.658 14:56:59 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:13.658 14:56:59 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3760569 /var/tmp/bdevperf.sock 00:16:13.658 14:56:59 -- common/autotest_common.sh@817 -- # '[' -z 3760569 ']' 00:16:13.658 14:56:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:13.658 14:56:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:13.658 14:56:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:13.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:13.658 14:56:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:13.658 14:56:59 -- common/autotest_common.sh@10 -- # set +x 00:16:13.658 [2024-04-26 14:56:59.337229] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
00:16:13.658 [2024-04-26 14:56:59.337317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760569 ] 00:16:13.658 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.659 [2024-04-26 14:56:59.375426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:13.917 [2024-04-26 14:56:59.403916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.917 [2024-04-26 14:56:59.491841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.917 14:56:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.917 14:56:59 -- common/autotest_common.sh@850 -- # return 0 00:16:13.917 14:56:59 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:14.486 Nvme0n1 00:16:14.486 14:57:00 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:14.744 [ 00:16:14.744 { 00:16:14.744 "name": "Nvme0n1", 00:16:14.744 "aliases": [ 00:16:14.744 "4bfb784b-2997-4edd-906b-5c6a60bcd1f7" 00:16:14.744 ], 00:16:14.744 "product_name": "NVMe disk", 00:16:14.744 "block_size": 4096, 00:16:14.744 "num_blocks": 38912, 00:16:14.744 "uuid": "4bfb784b-2997-4edd-906b-5c6a60bcd1f7", 00:16:14.744 "assigned_rate_limits": { 00:16:14.744 "rw_ios_per_sec": 0, 00:16:14.744 "rw_mbytes_per_sec": 0, 00:16:14.744 "r_mbytes_per_sec": 0, 00:16:14.744 "w_mbytes_per_sec": 0 00:16:14.744 }, 00:16:14.744 "claimed": false, 00:16:14.744 "zoned": false, 00:16:14.744 "supported_io_types": { 00:16:14.744 "read": true, 00:16:14.744 "write": true, 00:16:14.744 "unmap": true, 00:16:14.744 "write_zeroes": true, 00:16:14.744 "flush": true, 00:16:14.744 "reset": true, 00:16:14.744 "compare": true, 00:16:14.744 "compare_and_write": true, 00:16:14.744 "abort": true, 00:16:14.744 "nvme_admin": true, 00:16:14.744 "nvme_io": true 00:16:14.744 }, 00:16:14.744 "memory_domains": [ 00:16:14.744 { 00:16:14.744 "dma_device_id": "system", 00:16:14.744 "dma_device_type": 1 00:16:14.744 } 00:16:14.744 ], 00:16:14.744 "driver_specific": { 00:16:14.744 "nvme": [ 00:16:14.744 { 00:16:14.744 "trid": { 00:16:14.744 "trtype": "TCP", 00:16:14.744 "adrfam": "IPv4", 00:16:14.744 "traddr": "10.0.0.2", 00:16:14.744 "trsvcid": "4420", 00:16:14.744 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:14.744 }, 00:16:14.744 "ctrlr_data": { 00:16:14.744 "cntlid": 1, 00:16:14.744 "vendor_id": "0x8086", 00:16:14.744 "model_number": "SPDK bdev Controller", 00:16:14.744 "serial_number": "SPDK0", 00:16:14.744 "firmware_revision": "24.05", 00:16:14.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:14.745 "oacs": { 00:16:14.745 "security": 0, 00:16:14.745 "format": 0, 00:16:14.745 "firmware": 0, 00:16:14.745 "ns_manage": 0 00:16:14.745 }, 00:16:14.745 "multi_ctrlr": true, 00:16:14.745 "ana_reporting": false 00:16:14.745 }, 00:16:14.745 "vs": { 00:16:14.745 "nvme_version": "1.3" 00:16:14.745 }, 00:16:14.745 "ns_data": { 00:16:14.745 "id": 1, 00:16:14.745 "can_share": true 00:16:14.745 } 00:16:14.745 } 00:16:14.745 ], 00:16:14.745 "mp_policy": "active_passive" 00:16:14.745 } 00:16:14.745 } 00:16:14.745 ] 
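[Annotation, not captured log output: the JSON array closing above is the reply to bdev_get_bdevs for the freshly attached Nvme0n1 bdev. A minimal sketch of pulling the fields the harness checks out of that reply with jq; the socket path, bdev name, and RPC flags are copied from the log, while the jq filter itself is illustrative only.]

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  # Query the attached NVMe-oF bdev over the bdevperf RPC socket and print
  # its block size, block count, and the subsystem NQN it is connected to.
  "$RPC" -s "$SOCK" bdev_get_bdevs -b Nvme0n1 -t 3000 \
    | jq -r '.[0] | "\(.block_size) \(.num_blocks) \(.driver_specific.nvme[0].trid.subnqn)"'
  # Against the dump above this prints: 4096 38912 nqn.2016-06.io.spdk:cnode0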
00:16:14.745 14:57:00 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3760736 00:16:14.745 14:57:00 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:14.745 14:57:00 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:14.745 Running I/O for 10 seconds... 00:16:16.123 Latency(us) 00:16:16.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:16.123 Nvme0n1 : 1.00 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:16:16.123 =================================================================================================================== 00:16:16.123 Total : 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:16:16.123 00:16:16.692 14:57:02 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:17.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:17.011 Nvme0n1 : 2.00 14717.50 57.49 0.00 0.00 0.00 0.00 0.00 00:16:17.011 =================================================================================================================== 00:16:17.011 Total : 14717.50 57.49 0.00 0.00 0.00 0.00 0.00 00:16:17.011 00:16:17.011 true 00:16:17.011 14:57:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:17.011 14:57:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:17.330 14:57:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:17.330 14:57:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:17.330 14:57:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 3760736 00:16:17.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:17.897 Nvme0n1 : 3.00 14908.67 58.24 0.00 0.00 0.00 0.00 0.00 00:16:17.897 =================================================================================================================== 00:16:17.897 Total : 14908.67 58.24 0.00 0.00 0.00 0.00 0.00 00:16:17.897 00:16:18.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:18.835 Nvme0n1 : 4.00 14915.50 58.26 0.00 0.00 0.00 0.00 0.00 00:16:18.835 =================================================================================================================== 00:16:18.835 Total : 14915.50 58.26 0.00 0.00 0.00 0.00 0.00 00:16:18.835 00:16:19.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:19.772 Nvme0n1 : 5.00 15066.60 58.85 0.00 0.00 0.00 0.00 0.00 00:16:19.772 =================================================================================================================== 00:16:19.772 Total : 15066.60 58.85 0.00 0.00 0.00 0.00 0.00 00:16:19.772 00:16:21.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.153 Nvme0n1 : 6.00 15119.67 59.06 0.00 0.00 0.00 0.00 0.00 00:16:21.153 =================================================================================================================== 00:16:21.153 Total : 15119.67 59.06 0.00 0.00 0.00 0.00 0.00 00:16:21.153 00:16:22.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.089 Nvme0n1 : 7.00 15155.86 59.20 0.00 0.00 0.00 0.00 0.00 00:16:22.089 
=================================================================================================================== 00:16:22.089 Total : 15155.86 59.20 0.00 0.00 0.00 0.00 0.00 00:16:22.089 00:16:23.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.023 Nvme0n1 : 8.00 15103.62 59.00 0.00 0.00 0.00 0.00 0.00 00:16:23.023 =================================================================================================================== 00:16:23.023 Total : 15103.62 59.00 0.00 0.00 0.00 0.00 0.00 00:16:23.023 00:16:23.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.957 Nvme0n1 : 9.00 15091.00 58.95 0.00 0.00 0.00 0.00 0.00 00:16:23.957 =================================================================================================================== 00:16:23.957 Total : 15091.00 58.95 0.00 0.00 0.00 0.00 0.00 00:16:23.957 00:16:24.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.895 Nvme0n1 : 10.00 15044.00 58.77 0.00 0.00 0.00 0.00 0.00 00:16:24.895 =================================================================================================================== 00:16:24.895 Total : 15044.00 58.77 0.00 0.00 0.00 0.00 0.00 00:16:24.895 00:16:24.895 00:16:24.895 Latency(us) 00:16:24.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.895 Nvme0n1 : 10.00 15049.50 58.79 0.00 0.00 8500.68 2354.44 17767.54 00:16:24.895 =================================================================================================================== 00:16:24.895 Total : 15049.50 58.79 0.00 0.00 8500.68 2354.44 17767.54 00:16:24.895 0 00:16:24.895 14:57:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3760569 00:16:24.895 14:57:10 -- common/autotest_common.sh@936 -- # '[' -z 3760569 ']' 00:16:24.895 14:57:10 -- common/autotest_common.sh@940 -- # kill -0 3760569 00:16:24.895 14:57:10 -- common/autotest_common.sh@941 -- # uname 00:16:24.895 14:57:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:24.895 14:57:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3760569 00:16:24.895 14:57:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:24.895 14:57:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:24.895 14:57:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3760569' 00:16:24.895 killing process with pid 3760569 00:16:24.895 14:57:10 -- common/autotest_common.sh@955 -- # kill 3760569 00:16:24.895 Received shutdown signal, test time was about 10.000000 seconds 00:16:24.895 00:16:24.895 Latency(us) 00:16:24.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.895 =================================================================================================================== 00:16:24.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:24.895 14:57:10 -- common/autotest_common.sh@960 -- # wait 3760569 00:16:25.154 14:57:10 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:25.413 14:57:11 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:25.413 14:57:11 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:25.670 
14:57:11 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:25.670 14:57:11 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:25.670 14:57:11 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:25.931 [2024-04-26 14:57:11.570907] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:25.931 14:57:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:25.931 14:57:11 -- common/autotest_common.sh@638 -- # local es=0 00:16:25.931 14:57:11 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:25.931 14:57:11 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.931 14:57:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:25.931 14:57:11 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.931 14:57:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:25.931 14:57:11 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.931 14:57:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:25.931 14:57:11 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.931 14:57:11 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:25.931 14:57:11 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:26.190 request: 00:16:26.190 { 00:16:26.190 "uuid": "dc344c75-c779-4d4c-b87e-fd746278daaf", 00:16:26.190 "method": "bdev_lvol_get_lvstores", 00:16:26.190 "req_id": 1 00:16:26.190 } 00:16:26.190 Got JSON-RPC error response 00:16:26.190 response: 00:16:26.190 { 00:16:26.190 "code": -19, 00:16:26.190 "message": "No such device" 00:16:26.190 } 00:16:26.190 14:57:11 -- common/autotest_common.sh@641 -- # es=1 00:16:26.190 14:57:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:26.190 14:57:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:26.190 14:57:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:26.190 14:57:11 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:26.449 aio_bdev 00:16:26.449 14:57:12 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 4bfb784b-2997-4edd-906b-5c6a60bcd1f7 00:16:26.449 14:57:12 -- common/autotest_common.sh@885 -- # local bdev_name=4bfb784b-2997-4edd-906b-5c6a60bcd1f7 00:16:26.449 14:57:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:26.449 14:57:12 -- common/autotest_common.sh@887 -- # local i 00:16:26.449 14:57:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:26.449 14:57:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:26.449 14:57:12 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:16:26.707 14:57:12 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4bfb784b-2997-4edd-906b-5c6a60bcd1f7 -t 2000 00:16:26.965 [ 00:16:26.965 { 00:16:26.965 "name": "4bfb784b-2997-4edd-906b-5c6a60bcd1f7", 00:16:26.965 "aliases": [ 00:16:26.965 "lvs/lvol" 00:16:26.965 ], 00:16:26.965 "product_name": "Logical Volume", 00:16:26.965 "block_size": 4096, 00:16:26.965 "num_blocks": 38912, 00:16:26.965 "uuid": "4bfb784b-2997-4edd-906b-5c6a60bcd1f7", 00:16:26.965 "assigned_rate_limits": { 00:16:26.965 "rw_ios_per_sec": 0, 00:16:26.965 "rw_mbytes_per_sec": 0, 00:16:26.965 "r_mbytes_per_sec": 0, 00:16:26.965 "w_mbytes_per_sec": 0 00:16:26.965 }, 00:16:26.965 "claimed": false, 00:16:26.965 "zoned": false, 00:16:26.965 "supported_io_types": { 00:16:26.965 "read": true, 00:16:26.965 "write": true, 00:16:26.965 "unmap": true, 00:16:26.965 "write_zeroes": true, 00:16:26.965 "flush": false, 00:16:26.965 "reset": true, 00:16:26.965 "compare": false, 00:16:26.965 "compare_and_write": false, 00:16:26.965 "abort": false, 00:16:26.965 "nvme_admin": false, 00:16:26.965 "nvme_io": false 00:16:26.965 }, 00:16:26.965 "driver_specific": { 00:16:26.965 "lvol": { 00:16:26.965 "lvol_store_uuid": "dc344c75-c779-4d4c-b87e-fd746278daaf", 00:16:26.965 "base_bdev": "aio_bdev", 00:16:26.965 "thin_provision": false, 00:16:26.965 "snapshot": false, 00:16:26.965 "clone": false, 00:16:26.965 "esnap_clone": false 00:16:26.965 } 00:16:26.965 } 00:16:26.965 } 00:16:26.965 ] 00:16:26.965 14:57:12 -- common/autotest_common.sh@893 -- # return 0 00:16:26.965 14:57:12 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:26.965 14:57:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:27.222 14:57:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:27.222 14:57:12 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:27.222 14:57:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:27.480 14:57:13 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:27.480 14:57:13 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4bfb784b-2997-4edd-906b-5c6a60bcd1f7 00:16:27.737 14:57:13 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc344c75-c779-4d4c-b87e-fd746278daaf 00:16:27.995 14:57:13 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:28.253 00:16:28.253 real 0m17.098s 00:16:28.253 user 0m16.594s 00:16:28.253 sys 0m1.845s 00:16:28.253 14:57:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:28.253 14:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:28.253 ************************************ 00:16:28.253 END TEST lvs_grow_clean 00:16:28.253 ************************************ 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:28.253 14:57:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:28.253 14:57:13 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.253 14:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:28.253 ************************************ 00:16:28.253 START TEST lvs_grow_dirty 00:16:28.253 ************************************ 00:16:28.253 14:57:13 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:28.253 14:57:13 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:28.511 14:57:14 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:28.511 14:57:14 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:28.769 14:57:14 -- target/nvmf_lvs_grow.sh@28 -- # lvs=28076135-d4c8-4996-992b-0a4177a9d84f 00:16:28.769 14:57:14 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:28.769 14:57:14 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:29.027 14:57:14 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:29.027 14:57:14 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:29.027 14:57:14 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 28076135-d4c8-4996-992b-0a4177a9d84f lvol 150 00:16:29.285 14:57:14 -- target/nvmf_lvs_grow.sh@33 -- # lvol=db3ff024-2036-49c2-b9df-25e1e853df1f 00:16:29.285 14:57:14 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:29.285 14:57:14 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:29.543 [2024-04-26 14:57:15.227332] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:29.543 [2024-04-26 14:57:15.227436] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:29.543 true 00:16:29.543 14:57:15 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:29.543 14:57:15 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:29.802 14:57:15 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:29.802 14:57:15 -- 
target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:30.060 14:57:15 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 db3ff024-2036-49c2-b9df-25e1e853df1f 00:16:30.317 14:57:16 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:30.913 14:57:16 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:30.913 14:57:16 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3763244 00:16:30.913 14:57:16 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:30.913 14:57:16 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:30.913 14:57:16 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3763244 /var/tmp/bdevperf.sock 00:16:30.913 14:57:16 -- common/autotest_common.sh@817 -- # '[' -z 3763244 ']' 00:16:30.913 14:57:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.913 14:57:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:30.913 14:57:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.913 14:57:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:30.913 14:57:16 -- common/autotest_common.sh@10 -- # set +x 00:16:30.913 [2024-04-26 14:57:16.621825] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:16:30.913 [2024-04-26 14:57:16.621914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763244 ] 00:16:31.171 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.171 [2024-04-26 14:57:16.659911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
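[Annotation: the bdevperf instance starting above will attach to the export wired up by the three nvmf_* RPCs earlier in this test. A condensed sketch of that wiring as a reading aid; every NQN, UUID, address, and flag is copied verbatim from the log lines above.]

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode0
  # 1. Create the subsystem (allow any host, serial number SPDK0).
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK0
  # 2. Expose the 150M lvol created above as a namespace of the subsystem.
  "$RPC" nvmf_subsystem_add_ns "$NQN" db3ff024-2036-49c2-b9df-25e1e853df1f
  # 3. Listen on the target address inside the cvl_0_0_ns_spdk namespace.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420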
00:16:31.171 [2024-04-26 14:57:16.689826] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.171 [2024-04-26 14:57:16.779066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.171 14:57:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:31.171 14:57:16 -- common/autotest_common.sh@850 -- # return 0 00:16:31.171 14:57:16 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:31.735 Nvme0n1 00:16:31.735 14:57:17 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:31.992 [ 00:16:31.992 { 00:16:31.992 "name": "Nvme0n1", 00:16:31.992 "aliases": [ 00:16:31.992 "db3ff024-2036-49c2-b9df-25e1e853df1f" 00:16:31.992 ], 00:16:31.992 "product_name": "NVMe disk", 00:16:31.992 "block_size": 4096, 00:16:31.992 "num_blocks": 38912, 00:16:31.992 "uuid": "db3ff024-2036-49c2-b9df-25e1e853df1f", 00:16:31.992 "assigned_rate_limits": { 00:16:31.992 "rw_ios_per_sec": 0, 00:16:31.992 "rw_mbytes_per_sec": 0, 00:16:31.992 "r_mbytes_per_sec": 0, 00:16:31.992 "w_mbytes_per_sec": 0 00:16:31.992 }, 00:16:31.992 "claimed": false, 00:16:31.992 "zoned": false, 00:16:31.992 "supported_io_types": { 00:16:31.992 "read": true, 00:16:31.992 "write": true, 00:16:31.992 "unmap": true, 00:16:31.992 "write_zeroes": true, 00:16:31.992 "flush": true, 00:16:31.992 "reset": true, 00:16:31.992 "compare": true, 00:16:31.992 "compare_and_write": true, 00:16:31.992 "abort": true, 00:16:31.992 "nvme_admin": true, 00:16:31.992 "nvme_io": true 00:16:31.992 }, 00:16:31.992 "memory_domains": [ 00:16:31.992 { 00:16:31.992 "dma_device_id": "system", 00:16:31.992 "dma_device_type": 1 00:16:31.992 } 00:16:31.992 ], 00:16:31.992 "driver_specific": { 00:16:31.992 "nvme": [ 00:16:31.992 { 00:16:31.992 "trid": { 00:16:31.992 "trtype": "TCP", 00:16:31.992 "adrfam": "IPv4", 00:16:31.992 "traddr": "10.0.0.2", 00:16:31.992 "trsvcid": "4420", 00:16:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:31.992 }, 00:16:31.992 "ctrlr_data": { 00:16:31.992 "cntlid": 1, 00:16:31.992 "vendor_id": "0x8086", 00:16:31.992 "model_number": "SPDK bdev Controller", 00:16:31.992 "serial_number": "SPDK0", 00:16:31.992 "firmware_revision": "24.05", 00:16:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:31.992 "oacs": { 00:16:31.992 "security": 0, 00:16:31.992 "format": 0, 00:16:31.992 "firmware": 0, 00:16:31.992 "ns_manage": 0 00:16:31.992 }, 00:16:31.992 "multi_ctrlr": true, 00:16:31.992 "ana_reporting": false 00:16:31.992 }, 00:16:31.992 "vs": { 00:16:31.992 "nvme_version": "1.3" 00:16:31.992 }, 00:16:31.992 "ns_data": { 00:16:31.992 "id": 1, 00:16:31.992 "can_share": true 00:16:31.992 } 00:16:31.992 } 00:16:31.992 ], 00:16:31.992 "mp_policy": "active_passive" 00:16:31.992 } 00:16:31.992 } 00:16:31.992 ] 00:16:31.992 14:57:17 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3763378 00:16:31.992 14:57:17 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:31.992 14:57:17 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:31.992 Running I/O for 10 seconds... 
00:16:33.365 Latency(us) 00:16:33.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.365 Nvme0n1 : 1.00 14239.00 55.62 0.00 0.00 0.00 0.00 0.00 00:16:33.365 =================================================================================================================== 00:16:33.365 Total : 14239.00 55.62 0.00 0.00 0.00 0.00 0.00 00:16:33.365 00:16:33.930 14:57:19 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:34.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.231 Nvme0n1 : 2.00 14755.00 57.64 0.00 0.00 0.00 0.00 0.00 00:16:34.232 =================================================================================================================== 00:16:34.232 Total : 14755.00 57.64 0.00 0.00 0.00 0.00 0.00 00:16:34.232 00:16:34.232 true 00:16:34.232 14:57:19 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:34.232 14:57:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:34.799 14:57:20 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:34.799 14:57:20 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:34.799 14:57:20 -- target/nvmf_lvs_grow.sh@65 -- # wait 3763378 00:16:35.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.057 Nvme0n1 : 3.00 14578.67 56.95 0.00 0.00 0.00 0.00 0.00 00:16:35.057 =================================================================================================================== 00:16:35.057 Total : 14578.67 56.95 0.00 0.00 0.00 0.00 0.00 00:16:35.057 00:16:36.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.428 Nvme0n1 : 4.00 14579.50 56.95 0.00 0.00 0.00 0.00 0.00 00:16:36.428 =================================================================================================================== 00:16:36.428 Total : 14579.50 56.95 0.00 0.00 0.00 0.00 0.00 00:16:36.428 00:16:37.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.360 Nvme0n1 : 5.00 14786.60 57.76 0.00 0.00 0.00 0.00 0.00 00:16:37.360 =================================================================================================================== 00:16:37.360 Total : 14786.60 57.76 0.00 0.00 0.00 0.00 0.00 00:16:37.360 00:16:38.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.293 Nvme0n1 : 6.00 14770.83 57.70 0.00 0.00 0.00 0.00 0.00 00:16:38.293 =================================================================================================================== 00:16:38.293 Total : 14770.83 57.70 0.00 0.00 0.00 0.00 0.00 00:16:38.293 00:16:39.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.226 Nvme0n1 : 7.00 14769.43 57.69 0.00 0.00 0.00 0.00 0.00 00:16:39.226 =================================================================================================================== 00:16:39.226 Total : 14769.43 57.69 0.00 0.00 0.00 0.00 0.00 00:16:39.226 00:16:40.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.159 Nvme0n1 : 8.00 14884.25 58.14 0.00 0.00 0.00 0.00 0.00 00:16:40.159 
=================================================================================================================== 00:16:40.159 Total : 14884.25 58.14 0.00 0.00 0.00 0.00 0.00 00:16:40.159 00:16:41.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.091 Nvme0n1 : 9.00 14922.56 58.29 0.00 0.00 0.00 0.00 0.00 00:16:41.091 =================================================================================================================== 00:16:41.091 Total : 14922.56 58.29 0.00 0.00 0.00 0.00 0.00 00:16:41.091 00:16:42.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.025 Nvme0n1 : 10.00 14984.60 58.53 0.00 0.00 0.00 0.00 0.00 00:16:42.025 =================================================================================================================== 00:16:42.025 Total : 14984.60 58.53 0.00 0.00 0.00 0.00 0.00 00:16:42.025 00:16:42.025 00:16:42.025 Latency(us) 00:16:42.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.025 Nvme0n1 : 10.01 14983.01 58.53 0.00 0.00 8537.03 4636.07 18544.26 00:16:42.025 =================================================================================================================== 00:16:42.025 Total : 14983.01 58.53 0.00 0.00 8537.03 4636.07 18544.26 00:16:42.025 0 00:16:42.283 14:57:27 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3763244 00:16:42.283 14:57:27 -- common/autotest_common.sh@936 -- # '[' -z 3763244 ']' 00:16:42.283 14:57:27 -- common/autotest_common.sh@940 -- # kill -0 3763244 00:16:42.283 14:57:27 -- common/autotest_common.sh@941 -- # uname 00:16:42.283 14:57:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.283 14:57:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3763244 00:16:42.283 14:57:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:42.283 14:57:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:42.283 14:57:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3763244' 00:16:42.283 killing process with pid 3763244 00:16:42.283 14:57:27 -- common/autotest_common.sh@955 -- # kill 3763244 00:16:42.283 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.283 00:16:42.283 Latency(us) 00:16:42.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.283 =================================================================================================================== 00:16:42.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.283 14:57:27 -- common/autotest_common.sh@960 -- # wait 3763244 00:16:42.540 14:57:28 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:42.797 14:57:28 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:42.797 14:57:28 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:43.055 14:57:28 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:43.055 14:57:28 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:43.055 14:57:28 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3760123 00:16:43.055 14:57:28 -- target/nvmf_lvs_grow.sh@74 -- # wait 3760123 00:16:43.055 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3760123 Killed "${NVMF_APP[@]}" "$@" 00:16:43.055 14:57:28 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:43.055 14:57:28 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:43.055 14:57:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:43.055 14:57:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:43.055 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:16:43.055 14:57:28 -- nvmf/common.sh@470 -- # nvmfpid=3764699 00:16:43.055 14:57:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:43.055 14:57:28 -- nvmf/common.sh@471 -- # waitforlisten 3764699 00:16:43.055 14:57:28 -- common/autotest_common.sh@817 -- # '[' -z 3764699 ']' 00:16:43.055 14:57:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.055 14:57:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:43.055 14:57:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.055 14:57:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:43.055 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:16:43.055 [2024-04-26 14:57:28.647600] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:16:43.055 [2024-04-26 14:57:28.647691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.055 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.055 [2024-04-26 14:57:28.686974] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:43.055 [2024-04-26 14:57:28.717372] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.313 [2024-04-26 14:57:28.804797] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.313 [2024-04-26 14:57:28.804855] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.313 [2024-04-26 14:57:28.804871] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.313 [2024-04-26 14:57:28.804885] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.313 [2024-04-26 14:57:28.804897] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
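[Annotation: the kill -9 above intentionally leaves the lvstore dirty, and the nvmf_tgt restarting here must recover it from the aio backing file. A sketch of the recovery probe the test performs next; paths, UUID, and expected cluster counts are taken from the surrounding log, and the jq one-liner is illustrative.]

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  # Re-register the same backing file; loading it triggers blobstore recovery
  # of the dirty lvstore (the "Performing recovery on blobstore" notice below).
  "$RPC" bdev_aio_create "$AIO" aio_bdev 4096
  # The recovered lvstore must still show the post-grow cluster counts.
  "$RPC" bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f \
    | jq -r '.[0] | "free=\(.free_clusters) total=\(.total_data_clusters)"'   # expect: free=61 total=99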
00:16:43.313 [2024-04-26 14:57:28.804929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.313 14:57:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:43.313 14:57:28 -- common/autotest_common.sh@850 -- # return 0 00:16:43.313 14:57:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:43.313 14:57:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:43.313 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:16:43.313 14:57:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.313 14:57:28 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:43.571 [2024-04-26 14:57:29.217076] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:43.571 [2024-04-26 14:57:29.217218] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:43.571 [2024-04-26 14:57:29.217274] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:43.571 14:57:29 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:43.571 14:57:29 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev db3ff024-2036-49c2-b9df-25e1e853df1f 00:16:43.571 14:57:29 -- common/autotest_common.sh@885 -- # local bdev_name=db3ff024-2036-49c2-b9df-25e1e853df1f 00:16:43.571 14:57:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:43.571 14:57:29 -- common/autotest_common.sh@887 -- # local i 00:16:43.571 14:57:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:43.571 14:57:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:43.571 14:57:29 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:43.828 14:57:29 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db3ff024-2036-49c2-b9df-25e1e853df1f -t 2000 00:16:44.085 [ 00:16:44.085 { 00:16:44.085 "name": "db3ff024-2036-49c2-b9df-25e1e853df1f", 00:16:44.085 "aliases": [ 00:16:44.085 "lvs/lvol" 00:16:44.085 ], 00:16:44.085 "product_name": "Logical Volume", 00:16:44.085 "block_size": 4096, 00:16:44.085 "num_blocks": 38912, 00:16:44.085 "uuid": "db3ff024-2036-49c2-b9df-25e1e853df1f", 00:16:44.085 "assigned_rate_limits": { 00:16:44.085 "rw_ios_per_sec": 0, 00:16:44.085 "rw_mbytes_per_sec": 0, 00:16:44.085 "r_mbytes_per_sec": 0, 00:16:44.085 "w_mbytes_per_sec": 0 00:16:44.085 }, 00:16:44.085 "claimed": false, 00:16:44.085 "zoned": false, 00:16:44.085 "supported_io_types": { 00:16:44.085 "read": true, 00:16:44.085 "write": true, 00:16:44.085 "unmap": true, 00:16:44.085 "write_zeroes": true, 00:16:44.085 "flush": false, 00:16:44.085 "reset": true, 00:16:44.085 "compare": false, 00:16:44.085 "compare_and_write": false, 00:16:44.085 "abort": false, 00:16:44.085 "nvme_admin": false, 00:16:44.085 "nvme_io": false 00:16:44.085 }, 00:16:44.085 "driver_specific": { 00:16:44.085 "lvol": { 00:16:44.085 "lvol_store_uuid": "28076135-d4c8-4996-992b-0a4177a9d84f", 00:16:44.085 "base_bdev": "aio_bdev", 00:16:44.085 "thin_provision": false, 00:16:44.085 "snapshot": false, 00:16:44.085 "clone": false, 00:16:44.085 "esnap_clone": false 00:16:44.085 } 00:16:44.085 } 00:16:44.085 } 00:16:44.085 ] 00:16:44.085 14:57:29 -- common/autotest_common.sh@893 -- # return 0 00:16:44.086 14:57:29 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:44.086 14:57:29 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:44.343 14:57:30 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:44.343 14:57:30 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:44.343 14:57:30 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:44.599 14:57:30 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:44.599 14:57:30 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:44.857 [2024-04-26 14:57:30.514237] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:44.857 14:57:30 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:44.857 14:57:30 -- common/autotest_common.sh@638 -- # local es=0 00:16:44.857 14:57:30 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:44.857 14:57:30 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.857 14:57:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:44.857 14:57:30 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.857 14:57:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:44.857 14:57:30 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.857 14:57:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:44.857 14:57:30 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.857 14:57:30 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:44.857 14:57:30 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:45.142 request: 00:16:45.142 { 00:16:45.142 "uuid": "28076135-d4c8-4996-992b-0a4177a9d84f", 00:16:45.143 "method": "bdev_lvol_get_lvstores", 00:16:45.143 "req_id": 1 00:16:45.143 } 00:16:45.143 Got JSON-RPC error response 00:16:45.143 response: 00:16:45.143 { 00:16:45.143 "code": -19, 00:16:45.143 "message": "No such device" 00:16:45.143 } 00:16:45.143 14:57:30 -- common/autotest_common.sh@641 -- # es=1 00:16:45.143 14:57:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:45.143 14:57:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:45.143 14:57:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:45.143 14:57:30 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:45.404 aio_bdev 00:16:45.404 14:57:31 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev db3ff024-2036-49c2-b9df-25e1e853df1f 00:16:45.404 14:57:31 -- 
common/autotest_common.sh@885 -- # local bdev_name=db3ff024-2036-49c2-b9df-25e1e853df1f 00:16:45.404 14:57:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:45.404 14:57:31 -- common/autotest_common.sh@887 -- # local i 00:16:45.404 14:57:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:45.404 14:57:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:45.404 14:57:31 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:45.662 14:57:31 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db3ff024-2036-49c2-b9df-25e1e853df1f -t 2000 00:16:45.919 [ 00:16:45.919 { 00:16:45.919 "name": "db3ff024-2036-49c2-b9df-25e1e853df1f", 00:16:45.919 "aliases": [ 00:16:45.919 "lvs/lvol" 00:16:45.919 ], 00:16:45.919 "product_name": "Logical Volume", 00:16:45.919 "block_size": 4096, 00:16:45.919 "num_blocks": 38912, 00:16:45.919 "uuid": "db3ff024-2036-49c2-b9df-25e1e853df1f", 00:16:45.919 "assigned_rate_limits": { 00:16:45.919 "rw_ios_per_sec": 0, 00:16:45.920 "rw_mbytes_per_sec": 0, 00:16:45.920 "r_mbytes_per_sec": 0, 00:16:45.920 "w_mbytes_per_sec": 0 00:16:45.920 }, 00:16:45.920 "claimed": false, 00:16:45.920 "zoned": false, 00:16:45.920 "supported_io_types": { 00:16:45.920 "read": true, 00:16:45.920 "write": true, 00:16:45.920 "unmap": true, 00:16:45.920 "write_zeroes": true, 00:16:45.920 "flush": false, 00:16:45.920 "reset": true, 00:16:45.920 "compare": false, 00:16:45.920 "compare_and_write": false, 00:16:45.920 "abort": false, 00:16:45.920 "nvme_admin": false, 00:16:45.920 "nvme_io": false 00:16:45.920 }, 00:16:45.920 "driver_specific": { 00:16:45.920 "lvol": { 00:16:45.920 "lvol_store_uuid": "28076135-d4c8-4996-992b-0a4177a9d84f", 00:16:45.920 "base_bdev": "aio_bdev", 00:16:45.920 "thin_provision": false, 00:16:45.920 "snapshot": false, 00:16:45.920 "clone": false, 00:16:45.920 "esnap_clone": false 00:16:45.920 } 00:16:45.920 } 00:16:45.920 } 00:16:45.920 ] 00:16:45.920 14:57:31 -- common/autotest_common.sh@893 -- # return 0 00:16:45.920 14:57:31 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:45.920 14:57:31 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:46.196 14:57:31 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:46.196 14:57:31 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:46.196 14:57:31 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:46.454 14:57:32 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:46.454 14:57:32 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete db3ff024-2036-49c2-b9df-25e1e853df1f 00:16:46.712 14:57:32 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 28076135-d4c8-4996-992b-0a4177a9d84f 00:16:46.969 14:57:32 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:47.227 14:57:32 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:47.227 00:16:47.227 real 0m18.969s 00:16:47.227 user 
0m47.670s 00:16:47.227 sys 0m4.890s 00:16:47.227 14:57:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:47.227 14:57:32 -- common/autotest_common.sh@10 -- # set +x 00:16:47.227 ************************************ 00:16:47.227 END TEST lvs_grow_dirty 00:16:47.227 ************************************ 00:16:47.227 14:57:32 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:47.227 14:57:32 -- common/autotest_common.sh@794 -- # type=--id 00:16:47.227 14:57:32 -- common/autotest_common.sh@795 -- # id=0 00:16:47.227 14:57:32 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:47.227 14:57:32 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:47.227 14:57:32 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:47.227 14:57:32 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:47.227 14:57:32 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:47.227 14:57:32 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:47.227 nvmf_trace.0 00:16:47.227 14:57:32 -- common/autotest_common.sh@809 -- # return 0 00:16:47.227 14:57:32 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:47.227 14:57:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:47.227 14:57:32 -- nvmf/common.sh@117 -- # sync 00:16:47.227 14:57:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.227 14:57:32 -- nvmf/common.sh@120 -- # set +e 00:16:47.227 14:57:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.227 14:57:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.227 rmmod nvme_tcp 00:16:47.525 rmmod nvme_fabrics 00:16:47.525 rmmod nvme_keyring 00:16:47.525 14:57:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.525 14:57:33 -- nvmf/common.sh@124 -- # set -e 00:16:47.525 14:57:33 -- nvmf/common.sh@125 -- # return 0 00:16:47.525 14:57:33 -- nvmf/common.sh@478 -- # '[' -n 3764699 ']' 00:16:47.525 14:57:33 -- nvmf/common.sh@479 -- # killprocess 3764699 00:16:47.525 14:57:33 -- common/autotest_common.sh@936 -- # '[' -z 3764699 ']' 00:16:47.525 14:57:33 -- common/autotest_common.sh@940 -- # kill -0 3764699 00:16:47.525 14:57:33 -- common/autotest_common.sh@941 -- # uname 00:16:47.525 14:57:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.525 14:57:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3764699 00:16:47.525 14:57:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:47.525 14:57:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:47.525 14:57:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3764699' 00:16:47.525 killing process with pid 3764699 00:16:47.525 14:57:33 -- common/autotest_common.sh@955 -- # kill 3764699 00:16:47.525 14:57:33 -- common/autotest_common.sh@960 -- # wait 3764699 00:16:47.782 14:57:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:47.782 14:57:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:47.782 14:57:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:47.782 14:57:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.782 14:57:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.782 14:57:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.782 14:57:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.782 14:57:33 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:49.681 14:57:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.681 00:16:49.681 real 0m41.486s 00:16:49.681 user 1m9.992s 00:16:49.681 sys 0m8.678s 00:16:49.681 14:57:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:49.681 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:16:49.681 ************************************ 00:16:49.681 END TEST nvmf_lvs_grow 00:16:49.681 ************************************ 00:16:49.681 14:57:35 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:49.681 14:57:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.681 14:57:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.681 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:16:49.940 ************************************ 00:16:49.940 START TEST nvmf_bdev_io_wait 00:16:49.940 ************************************ 00:16:49.940 14:57:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:49.940 * Looking for test storage... 00:16:49.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.940 14:57:35 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.940 14:57:35 -- nvmf/common.sh@7 -- # uname -s 00:16:49.940 14:57:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.940 14:57:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.940 14:57:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.940 14:57:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.940 14:57:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.940 14:57:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.940 14:57:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.940 14:57:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.940 14:57:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.940 14:57:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.940 14:57:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:49.940 14:57:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:49.940 14:57:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.940 14:57:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.940 14:57:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.940 14:57:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.940 14:57:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.940 14:57:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.940 14:57:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.940 14:57:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.940 14:57:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.940 14:57:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.940 14:57:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.940 14:57:35 -- paths/export.sh@5 -- # export PATH 00:16:49.940 14:57:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.940 14:57:35 -- nvmf/common.sh@47 -- # : 0 00:16:49.940 14:57:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.940 14:57:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.940 14:57:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.940 14:57:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.940 14:57:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.940 14:57:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.940 14:57:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.940 14:57:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.940 14:57:35 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.940 14:57:35 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.940 14:57:35 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:49.940 14:57:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:49.940 14:57:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.940 14:57:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:49.940 14:57:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:49.940 14:57:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:49.940 14:57:35 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.940 14:57:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.940 14:57:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.940 14:57:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:49.940 14:57:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:49.940 14:57:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.940 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:16:51.841 14:57:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:51.841 14:57:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.841 14:57:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.841 14:57:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.841 14:57:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.841 14:57:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.841 14:57:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.841 14:57:37 -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.841 14:57:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.841 14:57:37 -- nvmf/common.sh@296 -- # e810=() 00:16:51.841 14:57:37 -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.841 14:57:37 -- nvmf/common.sh@297 -- # x722=() 00:16:51.841 14:57:37 -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.841 14:57:37 -- nvmf/common.sh@298 -- # mlx=() 00:16:51.841 14:57:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.841 14:57:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.841 14:57:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.841 14:57:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.841 14:57:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.841 14:57:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.841 14:57:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:51.841 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:51.841 14:57:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:16:51.841 14:57:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:51.841 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:51.841 14:57:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.841 14:57:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.841 14:57:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.841 14:57:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:51.841 14:57:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.841 14:57:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:51.841 Found net devices under 0000:84:00.0: cvl_0_0 00:16:51.841 14:57:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.841 14:57:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.841 14:57:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.841 14:57:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:51.841 14:57:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.841 14:57:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:51.841 Found net devices under 0000:84:00.1: cvl_0_1 00:16:51.841 14:57:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.841 14:57:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:51.841 14:57:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:51.841 14:57:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:51.841 14:57:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:51.841 14:57:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.841 14:57:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.841 14:57:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.841 14:57:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.841 14:57:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.841 14:57:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.841 14:57:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.841 14:57:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.841 14:57:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.841 14:57:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.841 14:57:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.841 14:57:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.841 14:57:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.841 14:57:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.841 14:57:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.841 14:57:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.841 14:57:37 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.100 14:57:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.100 14:57:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.100 14:57:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:52.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:16:52.100 00:16:52.100 --- 10.0.0.2 ping statistics --- 00:16:52.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.100 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:16:52.100 14:57:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:16:52.100 00:16:52.100 --- 10.0.0.1 ping statistics --- 00:16:52.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.100 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:52.100 14:57:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.100 14:57:37 -- nvmf/common.sh@411 -- # return 0 00:16:52.100 14:57:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:52.100 14:57:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.100 14:57:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:52.100 14:57:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:52.100 14:57:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.100 14:57:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:52.100 14:57:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:52.100 14:57:37 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:52.100 14:57:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:52.100 14:57:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:52.100 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.100 14:57:37 -- nvmf/common.sh@470 -- # nvmfpid=3767242 00:16:52.100 14:57:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:52.100 14:57:37 -- nvmf/common.sh@471 -- # waitforlisten 3767242 00:16:52.100 14:57:37 -- common/autotest_common.sh@817 -- # '[' -z 3767242 ']' 00:16:52.100 14:57:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.100 14:57:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:52.100 14:57:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.100 14:57:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:52.100 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.100 [2024-04-26 14:57:37.688857] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
00:16:52.100 [2024-04-26 14:57:37.688935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.100 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.100 [2024-04-26 14:57:37.727498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:52.100 [2024-04-26 14:57:37.753869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.100 [2024-04-26 14:57:37.839104] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.100 [2024-04-26 14:57:37.839162] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.100 [2024-04-26 14:57:37.839176] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.100 [2024-04-26 14:57:37.839188] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.100 [2024-04-26 14:57:37.839199] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.100 [2024-04-26 14:57:37.839248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.100 [2024-04-26 14:57:37.839272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.100 [2024-04-26 14:57:37.839322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.100 [2024-04-26 14:57:37.839324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.359 14:57:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:52.359 14:57:37 -- common/autotest_common.sh@850 -- # return 0 00:16:52.359 14:57:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:52.359 14:57:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:52.359 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 14:57:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.359 14:57:37 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:52.359 14:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.359 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 14:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.359 14:57:37 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:52.359 14:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.359 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 14:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.359 14:57:37 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:52.359 14:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.359 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 [2024-04-26 14:57:37.988249] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.359 14:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.359 14:57:37 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:52.359 14:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.359 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 Malloc0 00:16:52.359 14:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:52.359 14:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.359 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 14:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:52.359 14:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.359 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 14:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.359 14:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.359 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:16:52.359 [2024-04-26 14:57:38.046335] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.359 14:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3767264 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@30 -- # READ_PID=3767266 00:16:52.359 14:57:38 -- nvmf/common.sh@521 -- # config=() 00:16:52.359 14:57:38 -- nvmf/common.sh@521 -- # local subsystem config 00:16:52.359 14:57:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.359 14:57:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.359 { 00:16:52.359 "params": { 00:16:52.359 "name": "Nvme$subsystem", 00:16:52.359 "trtype": "$TEST_TRANSPORT", 00:16:52.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.359 "adrfam": "ipv4", 00:16:52.359 "trsvcid": "$NVMF_PORT", 00:16:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.359 "hdgst": ${hdgst:-false}, 00:16:52.359 "ddgst": ${ddgst:-false} 00:16:52.359 }, 00:16:52.359 "method": "bdev_nvme_attach_controller" 00:16:52.359 } 00:16:52.359 EOF 00:16:52.359 )") 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3767268 00:16:52.359 14:57:38 -- nvmf/common.sh@521 -- # config=() 00:16:52.359 14:57:38 -- nvmf/common.sh@521 -- # local subsystem config 00:16:52.359 14:57:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.359 14:57:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.359 { 00:16:52.359 "params": { 00:16:52.359 "name": "Nvme$subsystem", 00:16:52.359 "trtype": "$TEST_TRANSPORT", 00:16:52.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.359 "adrfam": "ipv4", 00:16:52.359 "trsvcid": "$NVMF_PORT", 00:16:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.359 "hdgst": ${hdgst:-false}, 00:16:52.359 "ddgst": ${ddgst:-false} 00:16:52.359 }, 00:16:52.359 "method": 
"bdev_nvme_attach_controller" 00:16:52.359 } 00:16:52.359 EOF 00:16:52.359 )") 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3767271 00:16:52.359 14:57:38 -- nvmf/common.sh@543 -- # cat 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@35 -- # sync 00:16:52.359 14:57:38 -- nvmf/common.sh@521 -- # config=() 00:16:52.359 14:57:38 -- nvmf/common.sh@521 -- # local subsystem config 00:16:52.359 14:57:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.359 14:57:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.359 { 00:16:52.359 "params": { 00:16:52.359 "name": "Nvme$subsystem", 00:16:52.359 "trtype": "$TEST_TRANSPORT", 00:16:52.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.359 "adrfam": "ipv4", 00:16:52.359 "trsvcid": "$NVMF_PORT", 00:16:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.359 "hdgst": ${hdgst:-false}, 00:16:52.359 "ddgst": ${ddgst:-false} 00:16:52.359 }, 00:16:52.359 "method": "bdev_nvme_attach_controller" 00:16:52.359 } 00:16:52.359 EOF 00:16:52.359 )") 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:52.359 14:57:38 -- nvmf/common.sh@521 -- # config=() 00:16:52.359 14:57:38 -- nvmf/common.sh@543 -- # cat 00:16:52.359 14:57:38 -- nvmf/common.sh@521 -- # local subsystem config 00:16:52.359 14:57:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:52.359 14:57:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:52.359 { 00:16:52.359 "params": { 00:16:52.359 "name": "Nvme$subsystem", 00:16:52.359 "trtype": "$TEST_TRANSPORT", 00:16:52.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.359 "adrfam": "ipv4", 00:16:52.359 "trsvcid": "$NVMF_PORT", 00:16:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.359 "hdgst": ${hdgst:-false}, 00:16:52.359 "ddgst": ${ddgst:-false} 00:16:52.359 }, 00:16:52.359 "method": "bdev_nvme_attach_controller" 00:16:52.359 } 00:16:52.359 EOF 00:16:52.359 )") 00:16:52.359 14:57:38 -- nvmf/common.sh@543 -- # cat 00:16:52.359 14:57:38 -- target/bdev_io_wait.sh@37 -- # wait 3767264 00:16:52.359 14:57:38 -- nvmf/common.sh@543 -- # cat 00:16:52.359 14:57:38 -- nvmf/common.sh@545 -- # jq . 00:16:52.359 14:57:38 -- nvmf/common.sh@545 -- # jq . 00:16:52.359 14:57:38 -- nvmf/common.sh@546 -- # IFS=, 00:16:52.359 14:57:38 -- nvmf/common.sh@545 -- # jq . 
00:16:52.359 14:57:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:16:52.359 "params": {
00:16:52.359 "name": "Nvme1",
00:16:52.359 "trtype": "tcp",
00:16:52.359 "traddr": "10.0.0.2",
00:16:52.359 "adrfam": "ipv4",
00:16:52.359 "trsvcid": "4420",
00:16:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:52.359 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:52.359 "hdgst": false,
00:16:52.359 "ddgst": false
00:16:52.359 },
00:16:52.359 "method": "bdev_nvme_attach_controller"
00:16:52.359 }'
00:16:52.359 14:57:38 -- nvmf/common.sh@546 -- # IFS=,
00:16:52.359 14:57:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:16:52.359 "params": {
00:16:52.359 "name": "Nvme1",
00:16:52.359 "trtype": "tcp",
00:16:52.359 "traddr": "10.0.0.2",
00:16:52.359 "adrfam": "ipv4",
00:16:52.359 "trsvcid": "4420",
00:16:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:52.359 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:52.359 "hdgst": false,
00:16:52.359 "ddgst": false
00:16:52.359 },
00:16:52.359 "method": "bdev_nvme_attach_controller"
00:16:52.359 }'
00:16:52.359 14:57:38 -- nvmf/common.sh@545 -- # jq .
00:16:52.359 14:57:38 -- nvmf/common.sh@546 -- # IFS=,
00:16:52.359 14:57:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:16:52.359 "params": {
00:16:52.359 "name": "Nvme1",
00:16:52.359 "trtype": "tcp",
00:16:52.359 "traddr": "10.0.0.2",
00:16:52.359 "adrfam": "ipv4",
00:16:52.359 "trsvcid": "4420",
00:16:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:52.359 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:52.359 "hdgst": false,
00:16:52.359 "ddgst": false
00:16:52.359 },
00:16:52.359 "method": "bdev_nvme_attach_controller"
00:16:52.359 }'
00:16:52.359 14:57:38 -- nvmf/common.sh@546 -- # IFS=,
00:16:52.359 14:57:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:16:52.359 "params": {
00:16:52.359 "name": "Nvme1",
00:16:52.359 "trtype": "tcp",
00:16:52.359 "traddr": "10.0.0.2",
00:16:52.359 "adrfam": "ipv4",
00:16:52.359 "trsvcid": "4420",
00:16:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:52.359 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:52.359 "hdgst": false,
00:16:52.359 "ddgst": false
00:16:52.359 },
00:16:52.359 "method": "bdev_nvme_attach_controller"
00:16:52.359 }'
00:16:52.359 [2024-04-26 14:57:38.092380] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:16:52.359 [2024-04-26 14:57:38.092382] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:16:52.359 [2024-04-26 14:57:38.092463] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:16:52.359 [2024-04-26 14:57:38.092464] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:16:52.359 [2024-04-26 14:57:38.092979] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:16:52.359 [2024-04-26 14:57:38.092994] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:16:52.359 [2024-04-26 14:57:38.093063] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:52.359 [2024-04-26 14:57:38.093069] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:52.618 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.618 [2024-04-26 14:57:38.240617] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:52.618 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.618 [2024-04-26 14:57:38.268707] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.618 [2024-04-26 14:57:38.342051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:52.618 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.618 [2024-04-26 14:57:38.344829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:52.876 [2024-04-26 14:57:38.373205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.876 [2024-04-26 14:57:38.413534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:52.876 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.876 [2024-04-26 14:57:38.444062] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.876 [2024-04-26 14:57:38.447751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:52.876 [2024-04-26 14:57:38.483598] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:52.876 [2024-04-26 14:57:38.511722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:52.876 [2024-04-26 14:57:38.514323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.876 [2024-04-26 14:57:38.584408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:53.134 Running I/O for 1 seconds... 00:16:53.134 Running I/O for 1 seconds... 00:16:53.134 Running I/O for 1 seconds... 00:16:53.392 Running I/O for 1 seconds... 
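At this point the four bdevperf instances run concurrently against the same TCP subsystem: write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80. The script then reaps them in turn with wait, which is the bdev_io_wait exercise itself. A stand-alone sketch of that pattern, reusing the hypothetical /tmp/nvme1.json from the earlier sketch and the bdevperf path shown in this log:
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
masks=(0x10 0x20 0x40 0x80)
workloads=(write read flush unmap)
pids=()
for n in 0 1 2 3; do
    # A distinct core mask and instance id (-i) per job keeps the DPDK file prefixes apart.
    "$BDEVPERF" -m "${masks[n]}" -i $((n + 1)) --json /tmp/nvme1.json \
        -q 128 -o 4096 -w "${workloads[n]}" -t 1 -s 256 &
    pids+=($!)
done
# Reap each instance; a stuck I/O path would leave the matching wait blocked here.
for pid in "${pids[@]}"; do wait "$pid"; done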
00:16:53.957 00:16:53.957 Latency(us) 00:16:53.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.957 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:53.957 Nvme1n1 : 1.02 6566.05 25.65 0.00 0.00 19343.17 9272.13 32428.18 00:16:53.957 =================================================================================================================== 00:16:53.957 Total : 6566.05 25.65 0.00 0.00 19343.17 9272.13 32428.18 00:16:54.215 00:16:54.215 Latency(us) 00:16:54.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.215 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:54.215 Nvme1n1 : 1.00 205602.73 803.14 0.00 0.00 619.86 245.76 752.45 00:16:54.215 =================================================================================================================== 00:16:54.215 Total : 205602.73 803.14 0.00 0.00 619.86 245.76 752.45 00:16:54.215 00:16:54.215 Latency(us) 00:16:54.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.215 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:54.215 Nvme1n1 : 1.01 6333.12 24.74 0.00 0.00 20137.98 6359.42 40195.41 00:16:54.215 =================================================================================================================== 00:16:54.215 Total : 6333.12 24.74 0.00 0.00 20137.98 6359.42 40195.41 00:16:54.215 00:16:54.215 Latency(us) 00:16:54.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.215 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:54.215 Nvme1n1 : 1.01 9788.91 38.24 0.00 0.00 13024.21 6699.24 25243.50 00:16:54.215 =================================================================================================================== 00:16:54.215 Total : 9788.91 38.24 0.00 0.00 13024.21 6699.24 25243.50 00:16:54.474 14:57:40 -- target/bdev_io_wait.sh@38 -- # wait 3767266 00:16:54.474 14:57:40 -- target/bdev_io_wait.sh@39 -- # wait 3767268 00:16:54.474 14:57:40 -- target/bdev_io_wait.sh@40 -- # wait 3767271 00:16:54.474 14:57:40 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.474 14:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.474 14:57:40 -- common/autotest_common.sh@10 -- # set +x 00:16:54.474 14:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.474 14:57:40 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:54.474 14:57:40 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:54.474 14:57:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:54.474 14:57:40 -- nvmf/common.sh@117 -- # sync 00:16:54.474 14:57:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:54.474 14:57:40 -- nvmf/common.sh@120 -- # set +e 00:16:54.474 14:57:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.474 14:57:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.474 rmmod nvme_tcp 00:16:54.474 rmmod nvme_fabrics 00:16:54.474 rmmod nvme_keyring 00:16:54.732 14:57:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.732 14:57:40 -- nvmf/common.sh@124 -- # set -e 00:16:54.732 14:57:40 -- nvmf/common.sh@125 -- # return 0 00:16:54.732 14:57:40 -- nvmf/common.sh@478 -- # '[' -n 3767242 ']' 00:16:54.732 14:57:40 -- nvmf/common.sh@479 -- # killprocess 3767242 00:16:54.732 14:57:40 -- common/autotest_common.sh@936 -- # '[' -z 3767242 ']' 00:16:54.732 14:57:40 -- 
common/autotest_common.sh@940 -- # kill -0 3767242 00:16:54.732 14:57:40 -- common/autotest_common.sh@941 -- # uname 00:16:54.732 14:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:54.732 14:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3767242 00:16:54.732 14:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:54.732 14:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:54.732 14:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3767242' 00:16:54.732 killing process with pid 3767242 00:16:54.732 14:57:40 -- common/autotest_common.sh@955 -- # kill 3767242 00:16:54.732 14:57:40 -- common/autotest_common.sh@960 -- # wait 3767242 00:16:54.990 14:57:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:54.990 14:57:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:54.990 14:57:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:54.990 14:57:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.990 14:57:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.990 14:57:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.990 14:57:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.990 14:57:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.892 14:57:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.892 00:16:56.892 real 0m7.074s 00:16:56.892 user 0m16.492s 00:16:56.892 sys 0m3.330s 00:16:56.892 14:57:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.892 14:57:42 -- common/autotest_common.sh@10 -- # set +x 00:16:56.892 ************************************ 00:16:56.892 END TEST nvmf_bdev_io_wait 00:16:56.892 ************************************ 00:16:56.892 14:57:42 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:56.892 14:57:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.892 14:57:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.892 14:57:42 -- common/autotest_common.sh@10 -- # set +x 00:16:57.150 ************************************ 00:16:57.150 START TEST nvmf_queue_depth 00:16:57.150 ************************************ 00:16:57.150 14:57:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:57.150 * Looking for test storage... 
00:16:57.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.150 14:57:42 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.150 14:57:42 -- nvmf/common.sh@7 -- # uname -s 00:16:57.150 14:57:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.150 14:57:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.150 14:57:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.150 14:57:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.150 14:57:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.150 14:57:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.150 14:57:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.150 14:57:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.150 14:57:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.150 14:57:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.150 14:57:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:57.150 14:57:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:57.150 14:57:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.150 14:57:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.150 14:57:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.150 14:57:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.150 14:57:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.150 14:57:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.150 14:57:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.150 14:57:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.150 14:57:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.150 14:57:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.150 14:57:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.150 14:57:42 -- paths/export.sh@5 -- # export PATH 00:16:57.150 14:57:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.150 14:57:42 -- nvmf/common.sh@47 -- # : 0 00:16:57.150 14:57:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.150 14:57:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.150 14:57:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.150 14:57:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.150 14:57:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.150 14:57:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.150 14:57:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.150 14:57:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.150 14:57:42 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:57.150 14:57:42 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:57.150 14:57:42 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.150 14:57:42 -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:57.150 14:57:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:57.150 14:57:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.150 14:57:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:57.150 14:57:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:57.150 14:57:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:57.150 14:57:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.150 14:57:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.150 14:57:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.150 14:57:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:57.150 14:57:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:57.150 14:57:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.150 14:57:42 -- common/autotest_common.sh@10 -- # set +x 00:16:59.049 14:57:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:59.049 14:57:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:59.049 14:57:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:59.049 14:57:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:59.049 14:57:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:59.049 14:57:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:59.049 14:57:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:59.049 14:57:44 -- nvmf/common.sh@295 -- # net_devs=() 
00:16:59.049 14:57:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:59.049 14:57:44 -- nvmf/common.sh@296 -- # e810=() 00:16:59.049 14:57:44 -- nvmf/common.sh@296 -- # local -ga e810 00:16:59.049 14:57:44 -- nvmf/common.sh@297 -- # x722=() 00:16:59.049 14:57:44 -- nvmf/common.sh@297 -- # local -ga x722 00:16:59.049 14:57:44 -- nvmf/common.sh@298 -- # mlx=() 00:16:59.049 14:57:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:59.049 14:57:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.049 14:57:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:59.049 14:57:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:59.049 14:57:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:59.049 14:57:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.049 14:57:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:59.049 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:59.049 14:57:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.049 14:57:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:59.049 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:59.049 14:57:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:59.049 14:57:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.049 14:57:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.049 14:57:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:59.049 14:57:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:59.049 14:57:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:59.049 Found net devices under 0000:84:00.0: cvl_0_0 00:16:59.049 14:57:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.049 14:57:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.049 14:57:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.049 14:57:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:59.049 14:57:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.049 14:57:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:59.049 Found net devices under 0000:84:00.1: cvl_0_1 00:16:59.049 14:57:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.049 14:57:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:59.049 14:57:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:59.049 14:57:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:59.049 14:57:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:59.049 14:57:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.049 14:57:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.049 14:57:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.049 14:57:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:59.049 14:57:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.049 14:57:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.049 14:57:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:59.049 14:57:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.049 14:57:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.049 14:57:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:59.049 14:57:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:59.049 14:57:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.049 14:57:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.308 14:57:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.308 14:57:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.308 14:57:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:59.308 14:57:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.308 14:57:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.308 14:57:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.308 14:57:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:59.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:16:59.308 00:16:59.308 --- 10.0.0.2 ping statistics --- 00:16:59.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.308 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:16:59.308 14:57:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:16:59.308 00:16:59.308 --- 10.0.0.1 ping statistics --- 00:16:59.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.308 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:16:59.308 14:57:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.308 14:57:44 -- nvmf/common.sh@411 -- # return 0 00:16:59.308 14:57:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:59.308 14:57:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.308 14:57:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:59.308 14:57:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:59.308 14:57:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.308 14:57:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:59.308 14:57:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:59.308 14:57:44 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:59.308 14:57:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:59.308 14:57:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:59.308 14:57:44 -- common/autotest_common.sh@10 -- # set +x 00:16:59.308 14:57:44 -- nvmf/common.sh@470 -- # nvmfpid=3769511 00:16:59.308 14:57:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:59.308 14:57:44 -- nvmf/common.sh@471 -- # waitforlisten 3769511 00:16:59.308 14:57:44 -- common/autotest_common.sh@817 -- # '[' -z 3769511 ']' 00:16:59.308 14:57:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.308 14:57:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:59.308 14:57:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.308 14:57:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:59.308 14:57:44 -- common/autotest_common.sh@10 -- # set +x 00:16:59.308 [2024-04-26 14:57:44.935672] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:16:59.308 [2024-04-26 14:57:44.935758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.308 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.308 [2024-04-26 14:57:44.974744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:59.308 [2024-04-26 14:57:45.001225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.566 [2024-04-26 14:57:45.089157] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.566 [2024-04-26 14:57:45.089232] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.566 [2024-04-26 14:57:45.089262] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.566 [2024-04-26 14:57:45.089275] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:59.566 [2024-04-26 14:57:45.089286] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.566 [2024-04-26 14:57:45.089331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.566 14:57:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:59.566 14:57:45 -- common/autotest_common.sh@850 -- # return 0 00:16:59.566 14:57:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:59.566 14:57:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:59.566 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 14:57:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.566 14:57:45 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.566 14:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.566 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 [2024-04-26 14:57:45.229000] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.566 14:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.566 14:57:45 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:59.566 14:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.566 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 Malloc0 00:16:59.566 14:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.566 14:57:45 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:59.566 14:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.566 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 14:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.566 14:57:45 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.566 14:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.566 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 14:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.566 14:57:45 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.566 14:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.566 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 [2024-04-26 14:57:45.291184] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.566 14:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.566 14:57:45 -- target/queue_depth.sh@30 -- # bdevperf_pid=3769645 00:16:59.566 14:57:45 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:59.566 14:57:45 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:59.566 14:57:45 -- target/queue_depth.sh@33 -- # waitforlisten 3769645 /var/tmp/bdevperf.sock 00:16:59.566 14:57:45 -- common/autotest_common.sh@817 -- # '[' -z 3769645 ']' 00:16:59.566 14:57:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.566 14:57:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:59.566 14:57:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.566 14:57:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:59.566 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.825 [2024-04-26 14:57:45.337359] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:16:59.825 [2024-04-26 14:57:45.337439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769645 ] 00:16:59.825 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.825 [2024-04-26 14:57:45.370350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:59.825 [2024-04-26 14:57:45.400823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.825 [2024-04-26 14:57:45.490125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.124 14:57:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:00.124 14:57:45 -- common/autotest_common.sh@850 -- # return 0 00:17:00.124 14:57:45 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:00.124 14:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.124 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:17:00.124 NVMe0n1 00:17:00.124 14:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.124 14:57:45 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:00.382 Running I/O for 10 seconds... 
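At this point queue_depth.sh has assembled a complete NVMe-oF/TCP loopback from the two cvl_0_* ports: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a bdevperf initiator pushing 1024 outstanding 4 KiB verify I/Os at it for 10 seconds. Distilled from the trace into a hedged standalone sketch (assumptions: run from an SPDK repo root, nvmf_tgt already up exactly as above, rpc.py reaching it on the default /var/tmp/spdk.sock):

rpc=scripts/rpc.py
# Transport options (-o, -u 8192) copied verbatim from the trace.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf acts as the initiator on its own RPC socket; -q 1024 is the queue
# depth under test, -o 4096 the I/O size, -w verify the workload.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests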
00:17:10.343 00:17:10.343 Latency(us) 00:17:10.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.343 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:10.343 Verification LBA range: start 0x0 length 0x4000 00:17:10.343 NVMe0n1 : 10.09 8524.00 33.30 0.00 0.00 119635.10 21651.15 78837.38 00:17:10.343 =================================================================================================================== 00:17:10.343 Total : 8524.00 33.30 0.00 0.00 119635.10 21651.15 78837.38 00:17:10.343 0 00:17:10.343 14:57:55 -- target/queue_depth.sh@39 -- # killprocess 3769645 00:17:10.343 14:57:55 -- common/autotest_common.sh@936 -- # '[' -z 3769645 ']' 00:17:10.343 14:57:55 -- common/autotest_common.sh@940 -- # kill -0 3769645 00:17:10.343 14:57:55 -- common/autotest_common.sh@941 -- # uname 00:17:10.343 14:57:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.343 14:57:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3769645 00:17:10.343 14:57:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:10.343 14:57:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:10.343 14:57:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3769645' 00:17:10.343 killing process with pid 3769645 00:17:10.343 14:57:56 -- common/autotest_common.sh@955 -- # kill 3769645 00:17:10.343 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.343 00:17:10.343 Latency(us) 00:17:10.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.343 =================================================================================================================== 00:17:10.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.343 14:57:56 -- common/autotest_common.sh@960 -- # wait 3769645 00:17:10.600 14:57:56 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:10.600 14:57:56 -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:10.600 14:57:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:10.600 14:57:56 -- nvmf/common.sh@117 -- # sync 00:17:10.600 14:57:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:10.600 14:57:56 -- nvmf/common.sh@120 -- # set +e 00:17:10.600 14:57:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:10.600 14:57:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:10.600 rmmod nvme_tcp 00:17:10.600 rmmod nvme_fabrics 00:17:10.600 rmmod nvme_keyring 00:17:10.600 14:57:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:10.600 14:57:56 -- nvmf/common.sh@124 -- # set -e 00:17:10.600 14:57:56 -- nvmf/common.sh@125 -- # return 0 00:17:10.600 14:57:56 -- nvmf/common.sh@478 -- # '[' -n 3769511 ']' 00:17:10.600 14:57:56 -- nvmf/common.sh@479 -- # killprocess 3769511 00:17:10.600 14:57:56 -- common/autotest_common.sh@936 -- # '[' -z 3769511 ']' 00:17:10.600 14:57:56 -- common/autotest_common.sh@940 -- # kill -0 3769511 00:17:10.600 14:57:56 -- common/autotest_common.sh@941 -- # uname 00:17:10.600 14:57:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.600 14:57:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3769511 00:17:10.858 14:57:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:10.858 14:57:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:10.858 14:57:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3769511' 00:17:10.858 killing process with pid 3769511 00:17:10.858 
14:57:56 -- common/autotest_common.sh@955 -- # kill 3769511 00:17:10.858 14:57:56 -- common/autotest_common.sh@960 -- # wait 3769511 00:17:11.115 14:57:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:11.115 14:57:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:11.115 14:57:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:11.115 14:57:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.115 14:57:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.115 14:57:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.115 14:57:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.115 14:57:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.013 14:57:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:13.013 00:17:13.013 real 0m16.004s 00:17:13.013 user 0m22.164s 00:17:13.013 sys 0m3.342s 00:17:13.013 14:57:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:13.013 14:57:58 -- common/autotest_common.sh@10 -- # set +x 00:17:13.013 ************************************ 00:17:13.013 END TEST nvmf_queue_depth 00:17:13.013 ************************************ 00:17:13.013 14:57:58 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:13.013 14:57:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:13.013 14:57:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.013 14:57:58 -- common/autotest_common.sh@10 -- # set +x 00:17:13.271 ************************************ 00:17:13.271 START TEST nvmf_multipath 00:17:13.271 ************************************ 00:17:13.271 14:57:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:13.271 * Looking for test storage... 
00:17:13.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.271 14:57:58 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.271 14:57:58 -- nvmf/common.sh@7 -- # uname -s 00:17:13.271 14:57:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.271 14:57:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.271 14:57:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.271 14:57:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.271 14:57:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.271 14:57:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.271 14:57:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.271 14:57:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.271 14:57:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.271 14:57:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.271 14:57:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.271 14:57:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.271 14:57:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.271 14:57:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.271 14:57:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.271 14:57:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.271 14:57:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.271 14:57:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.271 14:57:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.271 14:57:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.271 14:57:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.271 14:57:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.271 14:57:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.271 14:57:58 -- paths/export.sh@5 -- # export PATH 00:17:13.271 14:57:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.271 14:57:58 -- nvmf/common.sh@47 -- # : 0 00:17:13.271 14:57:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.271 14:57:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.271 14:57:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.271 14:57:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.271 14:57:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.271 14:57:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.271 14:57:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.271 14:57:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.271 14:57:58 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.271 14:57:58 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.271 14:57:58 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:13.271 14:57:58 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:13.271 14:57:58 -- target/multipath.sh@43 -- # nvmftestinit 00:17:13.271 14:57:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:13.271 14:57:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.271 14:57:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:13.271 14:57:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:13.271 14:57:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:13.271 14:57:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.271 14:57:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.271 14:57:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.271 14:57:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:13.271 14:57:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:13.271 14:57:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:13.271 14:57:58 -- common/autotest_common.sh@10 -- # set +x 00:17:15.168 14:58:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:15.168 14:58:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:15.168 14:58:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:15.168 14:58:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:15.168 14:58:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:15.168 14:58:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:15.168 14:58:00 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:17:15.168 14:58:00 -- nvmf/common.sh@295 -- # net_devs=() 00:17:15.168 14:58:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:15.168 14:58:00 -- nvmf/common.sh@296 -- # e810=() 00:17:15.168 14:58:00 -- nvmf/common.sh@296 -- # local -ga e810 00:17:15.168 14:58:00 -- nvmf/common.sh@297 -- # x722=() 00:17:15.168 14:58:00 -- nvmf/common.sh@297 -- # local -ga x722 00:17:15.168 14:58:00 -- nvmf/common.sh@298 -- # mlx=() 00:17:15.168 14:58:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:15.168 14:58:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.168 14:58:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.168 14:58:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.168 14:58:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.168 14:58:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.168 14:58:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.168 14:58:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.169 14:58:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.169 14:58:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.169 14:58:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.169 14:58:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.169 14:58:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:15.169 14:58:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:15.169 14:58:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:15.169 14:58:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.169 14:58:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:15.169 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:15.169 14:58:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.169 14:58:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:15.169 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:15.169 14:58:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:15.169 14:58:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.169 14:58:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.169 14:58:00 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:17:15.169 14:58:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.169 14:58:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:15.169 Found net devices under 0000:84:00.0: cvl_0_0 00:17:15.169 14:58:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.169 14:58:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.169 14:58:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.169 14:58:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:15.169 14:58:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.169 14:58:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:15.169 Found net devices under 0000:84:00.1: cvl_0_1 00:17:15.169 14:58:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.169 14:58:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:15.169 14:58:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:15.169 14:58:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:15.169 14:58:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:15.169 14:58:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.169 14:58:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.169 14:58:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.169 14:58:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:15.169 14:58:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.169 14:58:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.169 14:58:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:15.169 14:58:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.169 14:58:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.169 14:58:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:15.169 14:58:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:15.169 14:58:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.169 14:58:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.426 14:58:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.426 14:58:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.426 14:58:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:15.426 14:58:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.426 14:58:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.426 14:58:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.426 14:58:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:15.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:17:15.426 00:17:15.426 --- 10.0.0.2 ping statistics --- 00:17:15.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.426 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:17:15.426 14:58:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:15.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:17:15.426 00:17:15.426 --- 10.0.0.1 ping statistics --- 00:17:15.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.426 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:15.427 14:58:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.427 14:58:01 -- nvmf/common.sh@411 -- # return 0 00:17:15.427 14:58:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:15.427 14:58:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.427 14:58:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:15.427 14:58:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:15.427 14:58:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.427 14:58:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:15.427 14:58:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:15.427 14:58:01 -- target/multipath.sh@45 -- # '[' -z ']' 00:17:15.427 14:58:01 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:15.427 only one NIC for nvmf test 00:17:15.427 14:58:01 -- target/multipath.sh@47 -- # nvmftestfini 00:17:15.427 14:58:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:15.427 14:58:01 -- nvmf/common.sh@117 -- # sync 00:17:15.427 14:58:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.427 14:58:01 -- nvmf/common.sh@120 -- # set +e 00:17:15.427 14:58:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.427 14:58:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.427 rmmod nvme_tcp 00:17:15.427 rmmod nvme_fabrics 00:17:15.427 rmmod nvme_keyring 00:17:15.427 14:58:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.427 14:58:01 -- nvmf/common.sh@124 -- # set -e 00:17:15.427 14:58:01 -- nvmf/common.sh@125 -- # return 0 00:17:15.427 14:58:01 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:15.427 14:58:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:15.427 14:58:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:15.427 14:58:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:15.427 14:58:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.427 14:58:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:15.427 14:58:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.427 14:58:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.427 14:58:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.951 14:58:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:17.951 14:58:03 -- target/multipath.sh@48 -- # exit 0 00:17:17.951 14:58:03 -- target/multipath.sh@1 -- # nvmftestfini 00:17:17.951 14:58:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:17.951 14:58:03 -- nvmf/common.sh@117 -- # sync 00:17:17.951 14:58:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.951 14:58:03 -- nvmf/common.sh@120 -- # set +e 00:17:17.951 14:58:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.951 14:58:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.951 14:58:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.951 14:58:03 -- nvmf/common.sh@124 -- # set -e 00:17:17.951 14:58:03 -- nvmf/common.sh@125 -- # return 0 00:17:17.951 14:58:03 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:17.951 14:58:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:17.951 14:58:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:17.951 14:58:03 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:17:17.951 14:58:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.951 14:58:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.952 14:58:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.952 14:58:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.952 14:58:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.952 14:58:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:17.952 00:17:17.952 real 0m4.346s 00:17:17.952 user 0m0.804s 00:17:17.952 sys 0m1.539s 00:17:17.952 14:58:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:17.952 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:17:17.952 ************************************ 00:17:17.952 END TEST nvmf_multipath 00:17:17.952 ************************************ 00:17:17.952 14:58:03 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:17.952 14:58:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:17.952 14:58:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.952 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:17:17.952 ************************************ 00:17:17.952 START TEST nvmf_zcopy 00:17:17.952 ************************************ 00:17:17.952 14:58:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:17.952 * Looking for test storage... 00:17:17.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.952 14:58:03 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.952 14:58:03 -- nvmf/common.sh@7 -- # uname -s 00:17:17.952 14:58:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.952 14:58:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.952 14:58:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.952 14:58:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.952 14:58:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.952 14:58:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.952 14:58:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.952 14:58:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.952 14:58:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.952 14:58:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.952 14:58:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:17.952 14:58:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:17.952 14:58:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.952 14:58:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.952 14:58:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.952 14:58:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.952 14:58:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.952 14:58:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.952 14:58:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.952 14:58:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.952 
14:58:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.952 14:58:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.952 14:58:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.952 14:58:03 -- paths/export.sh@5 -- # export PATH 00:17:17.952 14:58:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.952 14:58:03 -- nvmf/common.sh@47 -- # : 0 00:17:17.952 14:58:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.952 14:58:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.952 14:58:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.952 14:58:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.952 14:58:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.952 14:58:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.952 14:58:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.952 14:58:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.952 14:58:03 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:17.952 14:58:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:17.952 14:58:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.952 14:58:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:17.952 14:58:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:17.952 14:58:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:17.952 14:58:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.952 14:58:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:17.952 14:58:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.952 14:58:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:17.952 14:58:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:17.952 14:58:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:17.952 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:17:19.847 14:58:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:19.847 14:58:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:19.847 14:58:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:19.847 14:58:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:19.847 14:58:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:19.847 14:58:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:19.847 14:58:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:19.847 14:58:05 -- nvmf/common.sh@295 -- # net_devs=() 00:17:19.847 14:58:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:19.847 14:58:05 -- nvmf/common.sh@296 -- # e810=() 00:17:19.847 14:58:05 -- nvmf/common.sh@296 -- # local -ga e810 00:17:19.847 14:58:05 -- nvmf/common.sh@297 -- # x722=() 00:17:19.847 14:58:05 -- nvmf/common.sh@297 -- # local -ga x722 00:17:19.847 14:58:05 -- nvmf/common.sh@298 -- # mlx=() 00:17:19.847 14:58:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:19.847 14:58:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.847 14:58:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:19.847 14:58:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:19.847 14:58:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:19.847 14:58:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.847 14:58:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:19.847 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:19.847 14:58:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.847 14:58:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:19.847 Found 0000:84:00.1 (0x8086 - 
0x159b) 00:17:19.847 14:58:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:19.847 14:58:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.847 14:58:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.847 14:58:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:19.847 14:58:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.847 14:58:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:19.847 Found net devices under 0000:84:00.0: cvl_0_0 00:17:19.847 14:58:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.847 14:58:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.847 14:58:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.847 14:58:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:19.847 14:58:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.847 14:58:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:19.847 Found net devices under 0000:84:00.1: cvl_0_1 00:17:19.847 14:58:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.847 14:58:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:19.847 14:58:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:19.847 14:58:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:19.847 14:58:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:19.847 14:58:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.847 14:58:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.847 14:58:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.847 14:58:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:19.847 14:58:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.847 14:58:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.847 14:58:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:19.847 14:58:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.847 14:58:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.847 14:58:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:19.847 14:58:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:19.847 14:58:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.847 14:58:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.847 14:58:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.847 14:58:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.847 14:58:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:19.847 14:58:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.847 14:58:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.847 
14:58:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.847 14:58:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:19.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:17:19.847 00:17:19.847 --- 10.0.0.2 ping statistics --- 00:17:19.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.847 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:17:19.847 14:58:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:17:19.847 00:17:19.847 --- 10.0.0.1 ping statistics --- 00:17:19.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.847 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:17:19.847 14:58:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.847 14:58:05 -- nvmf/common.sh@411 -- # return 0 00:17:19.847 14:58:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:19.847 14:58:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.848 14:58:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:19.848 14:58:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:19.848 14:58:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.848 14:58:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:19.848 14:58:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:19.848 14:58:05 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:19.848 14:58:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:19.848 14:58:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:19.848 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:19.848 14:58:05 -- nvmf/common.sh@470 -- # nvmfpid=3774750 00:17:19.848 14:58:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:19.848 14:58:05 -- nvmf/common.sh@471 -- # waitforlisten 3774750 00:17:19.848 14:58:05 -- common/autotest_common.sh@817 -- # '[' -z 3774750 ']' 00:17:19.848 14:58:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.848 14:58:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:19.848 14:58:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.848 14:58:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:19.848 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:19.848 [2024-04-26 14:58:05.496572] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:17:19.848 [2024-04-26 14:58:05.496651] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.848 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.848 [2024-04-26 14:58:05.536992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:19.848 [2024-04-26 14:58:05.563537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.105 [2024-04-26 14:58:05.646249] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.105 [2024-04-26 14:58:05.646331] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.105 [2024-04-26 14:58:05.646352] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.105 [2024-04-26 14:58:05.646364] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.105 [2024-04-26 14:58:05.646373] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.105 [2024-04-26 14:58:05.646410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.105 14:58:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:20.105 14:58:05 -- common/autotest_common.sh@850 -- # return 0 00:17:20.105 14:58:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:20.105 14:58:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:20.105 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:20.105 14:58:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.105 14:58:05 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:20.105 14:58:05 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:20.105 14:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.105 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:20.105 [2024-04-26 14:58:05.785551] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.105 14:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.105 14:58:05 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:20.105 14:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.105 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:20.105 14:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.105 14:58:05 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.105 14:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.105 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:20.105 [2024-04-26 14:58:05.801761] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.105 14:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.105 14:58:05 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:20.105 14:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.105 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:20.105 14:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.105 14:58:05 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:20.105 14:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.105 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:20.105 malloc0 00:17:20.105 14:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.105 14:58:05 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:20.105 14:58:05 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:17:20.105 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:17:20.105 14:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.105 14:58:05 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:20.105 14:58:05 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:20.105 14:58:05 -- nvmf/common.sh@521 -- # config=() 00:17:20.105 14:58:05 -- nvmf/common.sh@521 -- # local subsystem config 00:17:20.105 14:58:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:20.105 14:58:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:20.105 { 00:17:20.105 "params": { 00:17:20.105 "name": "Nvme$subsystem", 00:17:20.105 "trtype": "$TEST_TRANSPORT", 00:17:20.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:20.105 "adrfam": "ipv4", 00:17:20.105 "trsvcid": "$NVMF_PORT", 00:17:20.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:20.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:20.105 "hdgst": ${hdgst:-false}, 00:17:20.105 "ddgst": ${ddgst:-false} 00:17:20.105 }, 00:17:20.105 "method": "bdev_nvme_attach_controller" 00:17:20.105 } 00:17:20.105 EOF 00:17:20.105 )") 00:17:20.105 14:58:05 -- nvmf/common.sh@543 -- # cat 00:17:20.106 14:58:05 -- nvmf/common.sh@545 -- # jq . 00:17:20.363 14:58:05 -- nvmf/common.sh@546 -- # IFS=, 00:17:20.363 14:58:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:20.363 "params": { 00:17:20.363 "name": "Nvme1", 00:17:20.363 "trtype": "tcp", 00:17:20.363 "traddr": "10.0.0.2", 00:17:20.363 "adrfam": "ipv4", 00:17:20.363 "trsvcid": "4420", 00:17:20.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.363 "hdgst": false, 00:17:20.363 "ddgst": false 00:17:20.363 }, 00:17:20.363 "method": "bdev_nvme_attach_controller" 00:17:20.363 }' 00:17:20.363 [2024-04-26 14:58:05.881761] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:17:20.363 [2024-04-26 14:58:05.881843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774892 ] 00:17:20.363 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.363 [2024-04-26 14:58:05.912827] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:20.363 [2024-04-26 14:58:05.944825] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.363 [2024-04-26 14:58:06.037415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.620 Running I/O for 10 seconds... 
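Note: the initiator side is configured entirely through JSON; gen_nvmf_target_json emits a config whose single entry is a bdev_nvme_attach_controller call against 10.0.0.2:4420 (the printf output above shows the expanded parameters), and bdevperf reads it from a pipe; the --json /dev/fd/62 argument suggests bash process substitution. The target-side provisioning traced at zcopy.sh@22-30 maps one-to-one onto scripts/rpc.py calls; expressed directly, with the same argument values as the rpc_cmd trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # zero-copy on, in-capsule data size 0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB backing bdev, 4 KiB blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

rpc_cmd in the trace is the harness wrapper around rpc.py, pointed at the target's /var/tmp/spdk.sock.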
00:17:30.690
00:17:30.690 Latency(us)
00:17:30.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:30.690 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:17:30.690 Verification LBA range: start 0x0 length 0x1000
00:17:30.690 Nvme1n1 : 10.01 5636.22 44.03 0.00 0.00 22649.27 885.95 33981.63
00:17:30.690 ===================================================================================================================
00:17:30.690 Total : 5636.22 44.03 0.00 0.00 22649.27 885.95 33981.63
00:17:30.948 14:58:16 -- target/zcopy.sh@39 -- # perfpid=3776080 00:17:30.948 14:58:16 -- target/zcopy.sh@41 -- # xtrace_disable 00:17:30.948 14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:17:30.948 14:58:16 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:30.948 14:58:16 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:30.948 14:58:16 -- nvmf/common.sh@521 -- # config=() 00:17:30.948 14:58:16 -- nvmf/common.sh@521 -- # local subsystem config 00:17:30.948 14:58:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:30.948 14:58:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:30.948 { 00:17:30.948 "params": { 00:17:30.948 "name": "Nvme$subsystem", 00:17:30.948 "trtype": "$TEST_TRANSPORT", 00:17:30.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:30.948 "adrfam": "ipv4", 00:17:30.948 "trsvcid": "$NVMF_PORT", 00:17:30.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:30.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:30.948 "hdgst": ${hdgst:-false}, 00:17:30.948 "ddgst": ${ddgst:-false} 00:17:30.948 }, 00:17:30.948 "method": "bdev_nvme_attach_controller" 00:17:30.948 } 00:17:30.948 EOF 00:17:30.948 )") 00:17:30.948 14:58:16 -- nvmf/common.sh@543 -- # cat 00:17:30.948 [2024-04-26 14:58:16.536226] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.948 [2024-04-26 14:58:16.536271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.948 14:58:16 -- nvmf/common.sh@545 -- # jq .
00:17:30.948 14:58:16 -- nvmf/common.sh@546 -- # IFS=, 00:17:30.948 14:58:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:30.948 "params": { 00:17:30.948 "name": "Nvme1", 00:17:30.948 "trtype": "tcp", 00:17:30.948 "traddr": "10.0.0.2", 00:17:30.948 "adrfam": "ipv4", 00:17:30.948 "trsvcid": "4420", 00:17:30.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.948 "hdgst": false, 00:17:30.948 "ddgst": false 00:17:30.948 }, 00:17:30.948 "method": "bdev_nvme_attach_controller" 00:17:30.948 }' 00:17:30.948 [2024-04-26 14:58:16.544181] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.948 [2024-04-26 14:58:16.544206] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.948 [2024-04-26 14:58:16.552202] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.948 [2024-04-26 14:58:16.552225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.560221] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.560243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.568241] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.568270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.573429] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:17:30.949 [2024-04-26 14:58:16.573487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776080 ] 00:17:30.949 [2024-04-26 14:58:16.576262] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.576284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.584288] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.584336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.592320] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.592341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.600339] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.600359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.949 [2024-04-26 14:58:16.607599] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
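Note: the verify-run table above is easy to sanity-check, since at a fixed queue depth Little's law ties IOPS to average latency (IOPS ≈ depth / avg latency), and throughput is just IOPS times the 8192-byte IO size:

    echo 'scale=2; 5636.22 * 8192 / 1048576' | bc    # 44.03 -> matches the reported MiB/s
    echo 'scale=0; 128 / 0.02264927' | bc            # ~5651 -> close to the 5636.22 IOPS reported

The second bdevperf instance starting here (perfpid 3776080) replays the same generated JSON config via /dev/fd/63, this time as a 5-second 50/50 random read/write pass (-w randrw -M 50 -o 8192).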
00:17:30.949 [2024-04-26 14:58:16.608380] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.608406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.616397] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.616421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.624421] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.624446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.632448] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.632473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.638393] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.949 [2024-04-26 14:58:16.640462] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.640487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.648521] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.648563] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.656524] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.656555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.664531] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.664557] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.672552] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.672577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.680572] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.680599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:30.949 [2024-04-26 14:58:16.688590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:30.949 [2024-04-26 14:58:16.688615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.208 [2024-04-26 14:58:16.696648] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.208 [2024-04-26 14:58:16.696687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.208 [2024-04-26 14:58:16.704652] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.208 [2024-04-26 14:58:16.704686] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.208 [2024-04-26 14:58:16.712645] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.208 [2024-04-26 14:58:16.712666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.208 [2024-04-26 14:58:16.720668] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.208 [2024-04-26 14:58:16.720689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.208 [2024-04-26 14:58:16.728688] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.208 [2024-04-26 14:58:16.728709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.208 [2024-04-26 14:58:16.733129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.209 [2024-04-26 14:58:16.736709] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.736728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.744732] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.744753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.752798] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.752832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.760814] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.760854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.768836] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.768874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.776858] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.776898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.784881] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.784918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.792902] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.792943] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.800893] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.800919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.808934] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.808969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.816961] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.817017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.824987] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.825047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.832970] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.832991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.840987] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.841039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.849075] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.849099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.857105] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.857130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.865103] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.865127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.873125] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.873148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.881143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.881167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.889165] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.889187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.897188] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.897209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.905211] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.905234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.913234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.913256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.921320] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.921343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.929326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.929350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.937347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.937383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.209 [2024-04-26 14:58:16.945369] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.209 [2024-04-26 14:58:16.945391] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:16.953388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:16.953410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:16.961415] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:16.961435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:16.969424] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:16.969445] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:16.977456] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:16.977478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:16.985477] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:16.985497] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:16.993501] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:16.993526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.001524] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.001545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.009551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.009572] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.017576] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.017598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.025594] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.025614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.033633] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.033658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.041640] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.041661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 Running I/O for 5 seconds... 
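Note: the long run of paired errors through this stretch (subsystem.c:1906 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1534 "Unable to add namespace") is the exercised negative path, not a test failure: while the randrw workload runs, the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which the target must reject cleanly every time because malloc0 already holds that NSID; the nvmf_rpc_ns_paused frame in each pair indicates that every attempt also pauses and resumes the subsystem under live zcopy I/O. A hypothetical reconstruction of that loop (illustrative only, not the script verbatim):

    # hammer the target with a doomed add_ns while bdevperf ($perfpid) is alive;
    # the point is that pause/add/fail/resume churn must not disturb in-flight zcopy I/O
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done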
00:17:31.468 [2024-04-26 14:58:17.049681] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.049704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.062382] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.062408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.072731] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.072757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.083699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.083724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.095115] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.095142] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.106288] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.106333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.118955] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.118979] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.128886] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.128912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.468 [2024-04-26 14:58:17.139453] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.468 [2024-04-26 14:58:17.139478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.469 [2024-04-26 14:58:17.151851] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.469 [2024-04-26 14:58:17.151876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.469 [2024-04-26 14:58:17.163347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.469 [2024-04-26 14:58:17.163387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.469 [2024-04-26 14:58:17.172563] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.469 [2024-04-26 14:58:17.172589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.469 [2024-04-26 14:58:17.184080] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.469 [2024-04-26 14:58:17.184106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.469 [2024-04-26 14:58:17.194963] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.469 [2024-04-26 14:58:17.194987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.469 [2024-04-26 14:58:17.205936] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.469 
[2024-04-26 14:58:17.205962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.217015] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.217048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.227635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.227660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.238362] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.238388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.248780] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.248804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.259793] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.259819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.270481] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.270506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.281342] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.281368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.292312] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.292338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.304283] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.304325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.313656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.313682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.325656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.325682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.336412] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.336437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.346980] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.347030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.357643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.357668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.369434] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.369468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.381232] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.381258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.393141] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.393167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.404977] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.405007] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.416562] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.416593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.428401] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.428432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.440169] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.440195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.452251] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.452278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.727 [2024-04-26 14:58:17.464154] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.727 [2024-04-26 14:58:17.464180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.475857] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.475888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.487670] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.487701] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.499038] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.499080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.510929] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.510959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.523216] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.523242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.535789] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.535819] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.547752] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.547782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.559460] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.559491] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.572004] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.572047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.583624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.583654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.595151] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.595176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.606643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.606674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.618569] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.618600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.630990] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.631033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.643599] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.643629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.655551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.655582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.668116] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.668143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.679848] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.679879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.691415] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.691446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.704778] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.704809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.986 [2024-04-26 14:58:17.717104] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.986 [2024-04-26 14:58:17.717131] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.730267] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.730296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.741936] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.741966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.753582] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.753613] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.765232] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.765259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.776608] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.776638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.788638] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.788667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.799987] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.800016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.812395] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.812425] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.824640] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.824670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.836750] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.836790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.848717] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.848748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.860651] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.860682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.872190] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.872216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.883707] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.883738] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.895882] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.895912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.907546] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.907577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.919281] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.919324] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.931374] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.931404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.943276] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.943319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.955499] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.955530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.967640] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.967670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.245 [2024-04-26 14:58:17.981435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.245 [2024-04-26 14:58:17.981465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:17.993316] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:17.993347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.005029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.005072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.017228] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.017254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.029320] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.029350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.041150] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.041176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.053498] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.053528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.065744] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.065782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.078162] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.078188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.091525] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.091556] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.101893] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.101924] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.523 [2024-04-26 14:58:18.114076] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.523 [2024-04-26 14:58:18.114102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.125974] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.126006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.137633] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.137664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.151329] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.151360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.162641] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.162671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.174397] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.174428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.186240] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.186266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.200145] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.200171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.211380] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.211413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.222921] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.222951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.234445] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.234476] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.246076] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.246103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.524 [2024-04-26 14:58:18.257555] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.524 [2024-04-26 14:58:18.257584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.269636] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.269667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.281360] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.281390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.294937] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.294976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.306825] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.306854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.318886] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.318916] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.330834] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.330864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.342654] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.342684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.355034] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.355075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.367255] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.367281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.378533] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.378559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.389973] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.389997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.400557] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.400581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.782 [2024-04-26 14:58:18.412686] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:32.782 [2024-04-26 14:58:18.412711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats several hundred times with advancing timestamps (log clock 00:17:32.782 through 00:17:36.402, wall clock 2024-04-26 14:58:18.422812 through 14:58:22.014736): each retry is rejected by subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext because NSID 1 is already in use, after which nvmf_rpc.c:1534:nvmf_rpc_ns_paused reports "Unable to add namespace"; the pattern continues below ...]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.402 [2024-04-26 14:58:22.026767] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.402 [2024-04-26 14:58:22.026797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.402 [2024-04-26 14:58:22.038988] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.402 [2024-04-26 14:58:22.039030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.402 [2024-04-26 14:58:22.051149] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.402 [2024-04-26 14:58:22.051176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.402 [2024-04-26 14:58:22.061959] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.402 [2024-04-26 14:58:22.061989] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.402 [2024-04-26 14:58:22.067323] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.402 [2024-04-26 14:58:22.067352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.402 00:17:36.402 Latency(us) 00:17:36.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.402 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:36.402 Nvme1n1 : 5.01 10846.61 84.74 0.00 0.00 11783.85 4878.79 19126.80 00:17:36.402 =================================================================================================================== 00:17:36.402 Total : 10846.61 84.74 0.00 0.00 11783.85 4878.79 19126.80 00:17:36.403 [2024-04-26 14:58:22.075341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.403 [2024-04-26 14:58:22.075379] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.403 [2024-04-26 14:58:22.083345] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.403 [2024-04-26 14:58:22.083369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.403 [2024-04-26 14:58:22.091439] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.403 [2024-04-26 14:58:22.091488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.403 [2024-04-26 14:58:22.099455] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.403 [2024-04-26 14:58:22.099502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.403 [2024-04-26 14:58:22.107475] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.403 [2024-04-26 14:58:22.107520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.403 [2024-04-26 14:58:22.115487] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.403 [2024-04-26 14:58:22.115534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.403 [2024-04-26 14:58:22.123521] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.403 [2024-04-26 14:58:22.123568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.403 [2024-04-26 14:58:22.131545] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.403 [2024-04-26 14:58:22.131589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair continues at roughly 8 ms intervals from 14:58:22.139 through 14:58:22.235 as the post-I/O retries wind down ...]
00:17:36.661 [2024-04-26 14:58:22.243850] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.661 [2024-04-26 14:58:22.243897]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.661 [2024-04-26 14:58:22.251867] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.661 [2024-04-26 14:58:22.251912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.661 [2024-04-26 14:58:22.259836] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.661 [2024-04-26 14:58:22.259861] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.661 [2024-04-26 14:58:22.267856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.661 [2024-04-26 14:58:22.267881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.661 [2024-04-26 14:58:22.275877] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:36.661 [2024-04-26 14:58:22.275902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:36.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3776080) - No such process 00:17:36.661 14:58:22 -- target/zcopy.sh@49 -- # wait 3776080 00:17:36.661 14:58:22 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:36.661 14:58:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.661 14:58:22 -- common/autotest_common.sh@10 -- # set +x 00:17:36.661 14:58:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.661 14:58:22 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:36.661 14:58:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.661 14:58:22 -- common/autotest_common.sh@10 -- # set +x 00:17:36.661 delay0 00:17:36.661 14:58:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.661 14:58:22 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:36.661 14:58:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.661 14:58:22 -- common/autotest_common.sh@10 -- # set +x 00:17:36.661 14:58:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.661 14:58:22 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:36.661 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.919 [2024-04-26 14:58:22.439153] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:43.475 Initializing NVMe Controllers 00:17:43.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:43.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:43.475 Initialization complete. Launching workers. 
00:17:43.475 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 750 00:17:43.475 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1037, failed to submit 33 00:17:43.475 success 853, unsuccess 184, failed 0 00:17:43.475 14:58:28 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:43.475 14:58:28 -- target/zcopy.sh@60 -- # nvmftestfini 00:17:43.475 14:58:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:43.475 14:58:28 -- nvmf/common.sh@117 -- # sync 00:17:43.475 14:58:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.475 14:58:28 -- nvmf/common.sh@120 -- # set +e 00:17:43.475 14:58:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.475 14:58:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.475 rmmod nvme_tcp 00:17:43.475 rmmod nvme_fabrics 00:17:43.475 rmmod nvme_keyring 00:17:43.475 14:58:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.475 14:58:28 -- nvmf/common.sh@124 -- # set -e 00:17:43.475 14:58:28 -- nvmf/common.sh@125 -- # return 0 00:17:43.475 14:58:28 -- nvmf/common.sh@478 -- # '[' -n 3774750 ']' 00:17:43.475 14:58:28 -- nvmf/common.sh@479 -- # killprocess 3774750 00:17:43.475 14:58:28 -- common/autotest_common.sh@936 -- # '[' -z 3774750 ']' 00:17:43.475 14:58:28 -- common/autotest_common.sh@940 -- # kill -0 3774750 00:17:43.475 14:58:28 -- common/autotest_common.sh@941 -- # uname 00:17:43.475 14:58:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.475 14:58:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3774750 00:17:43.475 14:58:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:43.475 14:58:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:43.475 14:58:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3774750' 00:17:43.475 killing process with pid 3774750 00:17:43.475 14:58:28 -- common/autotest_common.sh@955 -- # kill 3774750 00:17:43.475 14:58:28 -- common/autotest_common.sh@960 -- # wait 3774750 00:17:43.475 14:58:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:43.476 14:58:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:43.476 14:58:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:43.476 14:58:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.476 14:58:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.476 14:58:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.476 14:58:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.476 14:58:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.378 14:58:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:45.378 00:17:45.378 real 0m27.800s 00:17:45.378 user 0m39.975s 00:17:45.378 sys 0m9.427s 00:17:45.378 14:58:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:45.378 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:17:45.378 ************************************ 00:17:45.378 END TEST nvmf_zcopy 00:17:45.378 ************************************ 00:17:45.378 14:58:31 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:45.378 14:58:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:45.378 14:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:45.378 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:17:45.636 
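For anyone replaying the zcopy abort step outside the harness, the sequence the script drove above reduces to a delay-bdev swap plus one run of the abort example. A minimal sketch using the exact values from this run, assuming scripts/rpc.py is run from the spdk checkout against the default /var/tmp/spdk.sock and the target from earlier in the run is still listening:

    # Wrap malloc0 in a delay bdev, expose it as NSID 1 on cnode1, then drive
    # the TCP listener with the abort example for 5 seconds at queue depth 64.
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev keeps I/O in flight long enough for aborts to land, which is why the run above reports a mix of successful and unsuccessful abort submissions rather than all failures.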
************************************ 00:17:45.636 START TEST nvmf_nmic 00:17:45.636 ************************************ 00:17:45.636 14:58:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:45.636 * Looking for test storage... 00:17:45.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.636 14:58:31 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.636 14:58:31 -- nvmf/common.sh@7 -- # uname -s 00:17:45.636 14:58:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.636 14:58:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.636 14:58:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.636 14:58:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.636 14:58:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.636 14:58:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.636 14:58:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.636 14:58:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.636 14:58:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.636 14:58:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.636 14:58:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:45.636 14:58:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:45.636 14:58:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.636 14:58:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.636 14:58:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.636 14:58:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.636 14:58:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.636 14:58:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.637 14:58:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.637 14:58:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.637 14:58:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.637 14:58:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.637 14:58:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.637 14:58:31 -- paths/export.sh@5 -- # export PATH 00:17:45.637 14:58:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.637 14:58:31 -- nvmf/common.sh@47 -- # : 0 00:17:45.637 14:58:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.637 14:58:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.637 14:58:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.637 14:58:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.637 14:58:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.637 14:58:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.637 14:58:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:45.637 14:58:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.637 14:58:31 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:45.637 14:58:31 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:45.637 14:58:31 -- target/nmic.sh@14 -- # nvmftestinit 00:17:45.637 14:58:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:45.637 14:58:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.637 14:58:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:45.637 14:58:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:45.637 14:58:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:45.637 14:58:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.637 14:58:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.637 14:58:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.637 14:58:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:45.637 14:58:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:45.637 14:58:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:45.637 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:17:48.165 14:58:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:48.165 14:58:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:48.165 14:58:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:48.165 14:58:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:48.165 14:58:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:48.165 14:58:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:48.165 14:58:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:48.165 14:58:33 -- nvmf/common.sh@295 -- # net_devs=() 00:17:48.165 14:58:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:48.165 14:58:33 -- nvmf/common.sh@296 -- # 
e810=() 00:17:48.166 14:58:33 -- nvmf/common.sh@296 -- # local -ga e810 00:17:48.166 14:58:33 -- nvmf/common.sh@297 -- # x722=() 00:17:48.166 14:58:33 -- nvmf/common.sh@297 -- # local -ga x722 00:17:48.166 14:58:33 -- nvmf/common.sh@298 -- # mlx=() 00:17:48.166 14:58:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:48.166 14:58:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.166 14:58:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:48.166 14:58:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:48.166 14:58:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:48.166 14:58:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.166 14:58:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:48.166 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:48.166 14:58:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.166 14:58:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:48.166 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:48.166 14:58:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:48.166 14:58:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.166 14:58:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.166 14:58:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:48.166 14:58:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.166 14:58:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:48.166 Found net 
devices under 0000:84:00.0: cvl_0_0 00:17:48.166 14:58:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.166 14:58:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.166 14:58:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.166 14:58:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:48.166 14:58:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.166 14:58:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:48.166 Found net devices under 0000:84:00.1: cvl_0_1 00:17:48.166 14:58:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.166 14:58:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:48.166 14:58:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:48.166 14:58:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:48.166 14:58:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.166 14:58:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.166 14:58:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.166 14:58:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:48.166 14:58:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.166 14:58:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.166 14:58:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:48.166 14:58:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.166 14:58:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.166 14:58:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:48.166 14:58:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:48.166 14:58:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.166 14:58:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.166 14:58:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.166 14:58:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.166 14:58:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:48.166 14:58:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.166 14:58:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.166 14:58:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.166 14:58:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:48.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:17:48.166 00:17:48.166 --- 10.0.0.2 ping statistics --- 00:17:48.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.166 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:48.166 14:58:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:17:48.166 00:17:48.166 --- 10.0.0.1 ping statistics --- 00:17:48.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.166 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:48.166 14:58:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.166 14:58:33 -- nvmf/common.sh@411 -- # return 0 00:17:48.166 14:58:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:48.166 14:58:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.166 14:58:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:48.166 14:58:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.166 14:58:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:48.166 14:58:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:48.166 14:58:33 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:48.166 14:58:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:48.166 14:58:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 14:58:33 -- nvmf/common.sh@470 -- # nvmfpid=3779484 00:17:48.166 14:58:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:48.166 14:58:33 -- nvmf/common.sh@471 -- # waitforlisten 3779484 00:17:48.166 14:58:33 -- common/autotest_common.sh@817 -- # '[' -z 3779484 ']' 00:17:48.166 14:58:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.166 14:58:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:48.166 14:58:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.166 14:58:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 [2024-04-26 14:58:33.533145] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:17:48.166 [2024-04-26 14:58:33.533235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.166 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.166 [2024-04-26 14:58:33.571085] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:48.166 [2024-04-26 14:58:33.597385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.166 [2024-04-26 14:58:33.681347] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.166 [2024-04-26 14:58:33.681403] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.166 [2024-04-26 14:58:33.681432] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.166 [2024-04-26 14:58:33.681444] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:48.166 [2024-04-26 14:58:33.681454] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.166 [2024-04-26 14:58:33.681517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.166 [2024-04-26 14:58:33.681871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.166 [2024-04-26 14:58:33.681928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.166 [2024-04-26 14:58:33.681931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.166 14:58:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.166 14:58:33 -- common/autotest_common.sh@850 -- # return 0 00:17:48.166 14:58:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:48.166 14:58:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 14:58:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.166 14:58:33 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:48.166 14:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 [2024-04-26 14:58:33.816511] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.166 14:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.166 14:58:33 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:48.166 14:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 Malloc0 00:17:48.166 14:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.166 14:58:33 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:48.166 14:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 14:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.166 14:58:33 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:48.166 14:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 14:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.166 14:58:33 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.166 14:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 [2024-04-26 14:58:33.867471] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.166 14:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.166 14:58:33 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:48.166 test case1: single bdev can't be used in multiple subsystems 00:17:48.166 14:58:33 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:48.166 14:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 14:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.166 14:58:33 -- target/nmic.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:48.166 14:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 14:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.166 14:58:33 -- target/nmic.sh@28 -- # nmic_status=0 00:17:48.166 14:58:33 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:48.166 14:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.166 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 [2024-04-26 14:58:33.891292] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:48.166 [2024-04-26 14:58:33.891337] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:48.166 [2024-04-26 14:58:33.891356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.166 request: 00:17:48.166 { 00:17:48.166 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:48.166 "namespace": { 00:17:48.166 "bdev_name": "Malloc0", 00:17:48.166 "no_auto_visible": false 00:17:48.166 }, 00:17:48.166 "method": "nvmf_subsystem_add_ns", 00:17:48.166 "req_id": 1 00:17:48.166 } 00:17:48.166 Got JSON-RPC error response 00:17:48.166 response: 00:17:48.166 { 00:17:48.166 "code": -32602, 00:17:48.166 "message": "Invalid parameters" 00:17:48.166 } 00:17:48.166 14:58:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:48.167 14:58:33 -- target/nmic.sh@29 -- # nmic_status=1 00:17:48.167 14:58:33 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:48.167 14:58:33 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:48.167 Adding namespace failed - expected result. 
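The expected-failure check in test case1 maps onto three RPCs; a sketch follows using the same NQNs and bdev name the script used, assuming a standalone scripts/rpc.py against the default socket (the harness issues these through its rpc_cmd wrapper):

    # Malloc0 is already claimed (exclusive_write) by cnode1, so adding it to a
    # second subsystem must be rejected; the -32602 JSON-RPC error above is the
    # pass condition for this test case.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "namespace add unexpectedly succeeded" >&2; exit 1
    fi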
00:17:48.167 14:58:33 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:48.167 test case2: host connect to nvmf target in multiple paths 00:17:48.167 14:58:33 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:48.167 14:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.167 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:17:48.167 [2024-04-26 14:58:33.899411] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:48.167 14:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.167 14:58:33 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.100 14:58:34 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:49.664 14:58:35 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:49.664 14:58:35 -- common/autotest_common.sh@1184 -- # local i=0 00:17:49.664 14:58:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.664 14:58:35 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:49.664 14:58:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:51.557 14:58:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:51.557 14:58:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:51.557 14:58:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.557 14:58:37 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:51.557 14:58:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.557 14:58:37 -- common/autotest_common.sh@1194 -- # return 0 00:17:51.557 14:58:37 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:51.557 [global] 00:17:51.557 thread=1 00:17:51.557 invalidate=1 00:17:51.557 rw=write 00:17:51.557 time_based=1 00:17:51.557 runtime=1 00:17:51.557 ioengine=libaio 00:17:51.557 direct=1 00:17:51.557 bs=4096 00:17:51.557 iodepth=1 00:17:51.557 norandommap=0 00:17:51.557 numjobs=1 00:17:51.557 00:17:51.557 verify_dump=1 00:17:51.557 verify_backlog=512 00:17:51.557 verify_state_save=0 00:17:51.557 do_verify=1 00:17:51.557 verify=crc32c-intel 00:17:51.557 [job0] 00:17:51.557 filename=/dev/nvme0n1 00:17:51.557 Could not set queue depth (nvme0n1) 00:17:51.813 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.813 fio-3.35 00:17:51.813 Starting 1 thread 00:17:52.742 00:17:52.742 job0: (groupid=0, jobs=1): err= 0: pid=3779996: Fri Apr 26 14:58:38 2024 00:17:52.742 read: IOPS=1743, BW=6973KiB/s (7140kB/s)(6980KiB/1001msec) 00:17:52.742 slat (nsec): min=6714, max=59706, avg=12539.23, stdev=5446.25 00:17:52.742 clat (usec): min=205, max=619, avg=288.92, stdev=43.75 00:17:52.742 lat (usec): min=212, max=636, avg=301.46, stdev=47.91 00:17:52.742 clat percentiles (usec): 00:17:52.742 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:17:52.742 | 30.00th=[ 255], 40.00th=[ 273], 50.00th=[ 297], 60.00th=[ 314], 00:17:52.742 | 70.00th=[ 322], 80.00th=[ 
330], 90.00th=[ 343], 95.00th=[ 351], 00:17:52.742 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 412], 99.95th=[ 619], 00:17:52.742 | 99.99th=[ 619] 00:17:52.742 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:17:52.742 slat (usec): min=8, max=28929, avg=29.74, stdev=638.96 00:17:52.742 clat (usec): min=138, max=434, avg=194.14, stdev=36.92 00:17:52.742 lat (usec): min=147, max=29181, avg=223.88, stdev=641.61 00:17:52.742 clat percentiles (usec): 00:17:52.742 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:17:52.742 | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 210], 00:17:52.742 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 251], 00:17:52.742 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 416], 99.95th=[ 416], 00:17:52.742 | 99.99th=[ 433] 00:17:52.742 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:17:52.742 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:52.742 lat (usec) : 250=63.20%, 500=36.78%, 750=0.03% 00:17:52.742 cpu : usr=3.70%, sys=7.40%, ctx=3796, majf=0, minf=2 00:17:52.742 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.742 issued rwts: total=1745,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.742 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.742 00:17:52.742 Run status group 0 (all jobs): 00:17:52.742 READ: bw=6973KiB/s (7140kB/s), 6973KiB/s-6973KiB/s (7140kB/s-7140kB/s), io=6980KiB (7148kB), run=1001-1001msec 00:17:52.742 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:17:52.742 00:17:52.742 Disk stats (read/write): 00:17:52.742 nvme0n1: ios=1562/1759, merge=0/0, ticks=1421/318, in_queue=1739, util=98.70% 00:17:52.742 14:58:38 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:52.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:52.999 14:58:38 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:52.999 14:58:38 -- common/autotest_common.sh@1205 -- # local i=0 00:17:52.999 14:58:38 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:52.999 14:58:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.999 14:58:38 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:53.000 14:58:38 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:53.000 14:58:38 -- common/autotest_common.sh@1217 -- # return 0 00:17:53.000 14:58:38 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:53.000 14:58:38 -- target/nmic.sh@53 -- # nvmftestfini 00:17:53.000 14:58:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:53.000 14:58:38 -- nvmf/common.sh@117 -- # sync 00:17:53.000 14:58:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.000 14:58:38 -- nvmf/common.sh@120 -- # set +e 00:17:53.000 14:58:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.000 14:58:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.000 rmmod nvme_tcp 00:17:53.000 rmmod nvme_fabrics 00:17:53.000 rmmod nvme_keyring 00:17:53.000 14:58:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.000 14:58:38 -- nvmf/common.sh@124 -- # set -e 00:17:53.000 14:58:38 -- nvmf/common.sh@125 -- # return 0 00:17:53.000 14:58:38 -- 
nvmf/common.sh@478 -- # '[' -n 3779484 ']' 00:17:53.000 14:58:38 -- nvmf/common.sh@479 -- # killprocess 3779484 00:17:53.000 14:58:38 -- common/autotest_common.sh@936 -- # '[' -z 3779484 ']' 00:17:53.000 14:58:38 -- common/autotest_common.sh@940 -- # kill -0 3779484 00:17:53.000 14:58:38 -- common/autotest_common.sh@941 -- # uname 00:17:53.000 14:58:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:53.000 14:58:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3779484 00:17:53.000 14:58:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:53.000 14:58:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:53.000 14:58:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3779484' 00:17:53.000 killing process with pid 3779484 00:17:53.000 14:58:38 -- common/autotest_common.sh@955 -- # kill 3779484 00:17:53.000 14:58:38 -- common/autotest_common.sh@960 -- # wait 3779484 00:17:53.259 14:58:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:53.259 14:58:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:53.259 14:58:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:53.259 14:58:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.259 14:58:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.259 14:58:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.259 14:58:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.259 14:58:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.786 14:58:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.786 00:17:55.786 real 0m9.820s 00:17:55.786 user 0m21.899s 00:17:55.786 sys 0m2.391s 00:17:55.786 14:58:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:55.786 14:58:40 -- common/autotest_common.sh@10 -- # set +x 00:17:55.786 ************************************ 00:17:55.786 END TEST nvmf_nmic 00:17:55.786 ************************************ 00:17:55.786 14:58:41 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:55.786 14:58:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:55.786 14:58:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:55.786 14:58:41 -- common/autotest_common.sh@10 -- # set +x 00:17:55.786 ************************************ 00:17:55.786 START TEST nvmf_fio_target 00:17:55.786 ************************************ 00:17:55.786 14:58:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:55.786 * Looking for test storage... 
00:17:55.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.786 14:58:41 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.786 14:58:41 -- nvmf/common.sh@7 -- # uname -s 00:17:55.786 14:58:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.786 14:58:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.786 14:58:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.786 14:58:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.786 14:58:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.786 14:58:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.786 14:58:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.786 14:58:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.786 14:58:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.786 14:58:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.786 14:58:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:55.786 14:58:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:55.786 14:58:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.786 14:58:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.786 14:58:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.786 14:58:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.786 14:58:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.786 14:58:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.786 14:58:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.786 14:58:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.786 14:58:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.786 14:58:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.786 14:58:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.786 14:58:41 -- paths/export.sh@5 -- # export PATH 00:17:55.786 14:58:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.786 14:58:41 -- nvmf/common.sh@47 -- # : 0 00:17:55.786 14:58:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.786 14:58:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.786 14:58:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.786 14:58:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.786 14:58:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.786 14:58:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.786 14:58:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.786 14:58:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.786 14:58:41 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.786 14:58:41 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.786 14:58:41 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.786 14:58:41 -- target/fio.sh@16 -- # nvmftestinit 00:17:55.786 14:58:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:55.786 14:58:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.786 14:58:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:55.786 14:58:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:55.786 14:58:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:55.786 14:58:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.786 14:58:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.786 14:58:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.786 14:58:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:55.786 14:58:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:55.786 14:58:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.786 14:58:41 -- common/autotest_common.sh@10 -- # set +x 00:17:57.685 14:58:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:57.685 14:58:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:57.685 14:58:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:57.685 14:58:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:57.685 14:58:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:57.685 14:58:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:57.685 14:58:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:57.685 14:58:43 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:57.685 14:58:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:57.685 14:58:43 -- nvmf/common.sh@296 -- # e810=() 00:17:57.685 14:58:43 -- nvmf/common.sh@296 -- # local -ga e810 00:17:57.685 14:58:43 -- nvmf/common.sh@297 -- # x722=() 00:17:57.685 14:58:43 -- nvmf/common.sh@297 -- # local -ga x722 00:17:57.685 14:58:43 -- nvmf/common.sh@298 -- # mlx=() 00:17:57.685 14:58:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:57.685 14:58:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.685 14:58:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:57.685 14:58:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:57.685 14:58:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:57.685 14:58:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.685 14:58:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:57.685 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:57.685 14:58:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.685 14:58:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:57.685 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:57.685 14:58:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:57.685 14:58:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:57.685 14:58:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.685 14:58:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.685 14:58:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:57.685 14:58:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
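Note: both E810 functions matched, and for each PCI function the script globs /sys/bus/pci/devices/$pci/net/ to find the kernel netdev bound to it, then strips the glob result down to the interface name (the echo lines below print what it found). A self-contained sketch of that mapping, using the sysfs path exactly as it appears in the trace; the helper name pci_to_netdev is illustrative:

    #!/usr/bin/env bash
    # Map a PCI function to its kernel net device name via sysfs.
    pci_to_netdev() {
      local pci=$1
      local devs=("/sys/bus/pci/devices/$pci/net/"*)
      [[ -e ${devs[0]} ]] || { echo "no netdev bound to $pci" >&2; return 1; }
      printf '%s\n' "${devs[@]##*/}"   # keep only the basename, e.g. cvl_0_0
    }
    pci_to_netdev 0000:84:00.0   # -> cvl_0_0 on this machine, per the log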
00:17:57.686 14:58:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:57.686 Found net devices under 0000:84:00.0: cvl_0_0 00:17:57.686 14:58:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.686 14:58:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.686 14:58:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.686 14:58:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:57.686 14:58:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.686 14:58:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:57.686 Found net devices under 0000:84:00.1: cvl_0_1 00:17:57.686 14:58:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.686 14:58:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:57.686 14:58:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:57.686 14:58:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:57.686 14:58:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:57.686 14:58:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:57.686 14:58:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.686 14:58:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.686 14:58:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.686 14:58:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:57.686 14:58:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.686 14:58:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.686 14:58:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:57.686 14:58:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.686 14:58:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.686 14:58:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:57.686 14:58:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:57.686 14:58:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.686 14:58:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.686 14:58:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.686 14:58:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.686 14:58:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:57.686 14:58:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.686 14:58:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.686 14:58:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.686 14:58:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:17:57.686 00:17:57.686 --- 10.0.0.2 ping statistics --- 00:17:57.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.686 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:17:57.686 14:58:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:17:57.686 00:17:57.686 --- 10.0.0.1 ping statistics --- 00:17:57.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.686 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:57.686 14:58:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.686 14:58:43 -- nvmf/common.sh@411 -- # return 0 00:17:57.686 14:58:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:57.686 14:58:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.686 14:58:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:57.686 14:58:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:57.686 14:58:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.686 14:58:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:57.686 14:58:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:57.686 14:58:43 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:57.686 14:58:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:57.686 14:58:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:57.686 14:58:43 -- common/autotest_common.sh@10 -- # set +x 00:17:57.686 14:58:43 -- nvmf/common.sh@470 -- # nvmfpid=3782213 00:17:57.686 14:58:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.686 14:58:43 -- nvmf/common.sh@471 -- # waitforlisten 3782213 00:17:57.686 14:58:43 -- common/autotest_common.sh@817 -- # '[' -z 3782213 ']' 00:17:57.686 14:58:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.686 14:58:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.686 14:58:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.686 14:58:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.686 14:58:43 -- common/autotest_common.sh@10 -- # set +x 00:17:57.686 [2024-04-26 14:58:43.357488] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:17:57.686 [2024-04-26 14:58:43.357580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.686 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.686 [2024-04-26 14:58:43.395160] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:57.686 [2024-04-26 14:58:43.421818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.943 [2024-04-26 14:58:43.505775] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.943 [2024-04-26 14:58:43.505833] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.943 [2024-04-26 14:58:43.505862] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.943 [2024-04-26 14:58:43.505874] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:57.943 [2024-04-26 14:58:43.505884] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.943 [2024-04-26 14:58:43.505943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.943 [2024-04-26 14:58:43.506001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.943 [2024-04-26 14:58:43.506066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.943 [2024-04-26 14:58:43.506069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.943 14:58:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:57.943 14:58:43 -- common/autotest_common.sh@850 -- # return 0 00:17:57.943 14:58:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:57.943 14:58:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:57.943 14:58:43 -- common/autotest_common.sh@10 -- # set +x 00:17:57.943 14:58:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.943 14:58:43 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:58.200 [2024-04-26 14:58:43.850188] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.200 14:58:43 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:58.458 14:58:44 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:58.458 14:58:44 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:58.715 14:58:44 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:58.715 14:58:44 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:58.973 14:58:44 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:58.974 14:58:44 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:59.233 14:58:44 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:59.233 14:58:44 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:59.491 14:58:45 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:59.749 14:58:45 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:59.749 14:58:45 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.007 14:58:45 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:00.007 14:58:45 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.265 14:58:45 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:00.265 14:58:45 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:00.522 14:58:46 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:00.779 14:58:46 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:00.779 14:58:46 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:01.036 14:58:46 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:01.036 14:58:46 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:01.293 14:58:46 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.550 [2024-04-26 14:58:47.160606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.550 14:58:47 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:01.808 14:58:47 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:02.065 14:58:47 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:02.677 14:58:48 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:02.677 14:58:48 -- common/autotest_common.sh@1184 -- # local i=0 00:18:02.677 14:58:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.677 14:58:48 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:18:02.677 14:58:48 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:18:02.677 14:58:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:04.573 14:58:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:04.573 14:58:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:04.573 14:58:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:04.830 14:58:50 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:18:04.830 14:58:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.830 14:58:50 -- common/autotest_common.sh@1194 -- # return 0 00:18:04.830 14:58:50 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:04.830 [global] 00:18:04.830 thread=1 00:18:04.830 invalidate=1 00:18:04.830 rw=write 00:18:04.830 time_based=1 00:18:04.830 runtime=1 00:18:04.830 ioengine=libaio 00:18:04.830 direct=1 00:18:04.830 bs=4096 00:18:04.830 iodepth=1 00:18:04.830 norandommap=0 00:18:04.830 numjobs=1 00:18:04.830 00:18:04.830 verify_dump=1 00:18:04.830 verify_backlog=512 00:18:04.830 verify_state_save=0 00:18:04.830 do_verify=1 00:18:04.830 verify=crc32c-intel 00:18:04.830 [job0] 00:18:04.830 filename=/dev/nvme0n1 00:18:04.830 [job1] 00:18:04.830 filename=/dev/nvme0n2 00:18:04.830 [job2] 00:18:04.830 filename=/dev/nvme0n3 00:18:04.830 [job3] 00:18:04.830 filename=/dev/nvme0n4 00:18:04.830 Could not set queue depth (nvme0n1) 00:18:04.830 Could not set queue depth (nvme0n2) 00:18:04.830 Could not set queue depth (nvme0n3) 00:18:04.831 Could not set queue depth (nvme0n4) 00:18:04.831 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:04.831 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:04.831 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:18:04.831 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:04.831 fio-3.35 00:18:04.831 Starting 4 threads 00:18:06.204 00:18:06.204 job0: (groupid=0, jobs=1): err= 0: pid=3783158: Fri Apr 26 14:58:51 2024 00:18:06.204 read: IOPS=277, BW=1110KiB/s (1137kB/s)(1156KiB/1041msec) 00:18:06.204 slat (nsec): min=7229, max=63876, avg=14321.43, stdev=7186.07 00:18:06.204 clat (usec): min=217, max=41977, avg=3132.00, stdev=10331.39 00:18:06.204 lat (usec): min=225, max=41992, avg=3146.32, stdev=10331.56 00:18:06.204 clat percentiles (usec): 00:18:06.204 | 1.00th=[ 225], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 265], 00:18:06.204 | 30.00th=[ 281], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:18:06.204 | 70.00th=[ 334], 80.00th=[ 392], 90.00th=[ 529], 95.00th=[41157], 00:18:06.204 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:06.204 | 99.99th=[42206] 00:18:06.204 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:18:06.204 slat (nsec): min=9678, max=85539, avg=13893.21, stdev=7989.73 00:18:06.204 clat (usec): min=154, max=358, avg=236.65, stdev=32.00 00:18:06.204 lat (usec): min=175, max=375, avg=250.54, stdev=33.99 00:18:06.204 clat percentiles (usec): 00:18:06.204 | 1.00th=[ 172], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 215], 00:18:06.204 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 241], 00:18:06.204 | 70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 302], 00:18:06.204 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 359], 99.95th=[ 359], 00:18:06.204 | 99.99th=[ 359] 00:18:06.204 bw ( KiB/s): min= 4096, max= 4096, per=27.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:06.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:06.204 lat (usec) : 250=49.06%, 500=47.07%, 750=1.37% 00:18:06.204 lat (msec) : 50=2.50% 00:18:06.204 cpu : usr=0.77%, sys=1.25%, ctx=802, majf=0, minf=1 00:18:06.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:06.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.204 issued rwts: total=289,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:06.204 job1: (groupid=0, jobs=1): err= 0: pid=3783159: Fri Apr 26 14:58:51 2024 00:18:06.204 read: IOPS=49, BW=197KiB/s (202kB/s)(200KiB/1013msec) 00:18:06.204 slat (nsec): min=7641, max=25357, avg=12189.04, stdev=4596.55 00:18:06.204 clat (usec): min=252, max=42037, avg=18174.09, stdev=20298.20 00:18:06.204 lat (usec): min=278, max=42051, avg=18186.28, stdev=20298.59 00:18:06.204 clat percentiles (usec): 00:18:06.204 | 1.00th=[ 253], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 338], 00:18:06.204 | 30.00th=[ 347], 40.00th=[ 367], 50.00th=[ 412], 60.00th=[40633], 00:18:06.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:06.204 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:06.204 | 99.99th=[42206] 00:18:06.204 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:18:06.204 slat (nsec): min=7246, max=39670, avg=9852.03, stdev=3910.83 00:18:06.204 clat (usec): min=151, max=382, avg=189.97, stdev=29.95 00:18:06.204 lat (usec): min=159, max=409, avg=199.82, stdev=31.70 00:18:06.204 clat percentiles (usec): 00:18:06.204 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 
00:18:06.204 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 190], 00:18:06.204 | 70.00th=[ 196], 80.00th=[ 208], 90.00th=[ 227], 95.00th=[ 243], 00:18:06.204 | 99.00th=[ 306], 99.50th=[ 351], 99.90th=[ 383], 99.95th=[ 383], 00:18:06.204 | 99.99th=[ 383] 00:18:06.204 bw ( KiB/s): min= 4096, max= 4096, per=27.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:06.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:06.204 lat (usec) : 250=87.19%, 500=8.54%, 750=0.36% 00:18:06.204 lat (msec) : 50=3.91% 00:18:06.204 cpu : usr=0.20%, sys=0.59%, ctx=562, majf=0, minf=1 00:18:06.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:06.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.204 issued rwts: total=50,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:06.204 job2: (groupid=0, jobs=1): err= 0: pid=3783160: Fri Apr 26 14:58:51 2024 00:18:06.204 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:06.204 slat (nsec): min=5294, max=24566, avg=7319.32, stdev=1909.02 00:18:06.204 clat (usec): min=207, max=42982, avg=748.72, stdev=4405.85 00:18:06.204 lat (usec): min=215, max=43002, avg=756.04, stdev=4406.72 00:18:06.204 clat percentiles (usec): 00:18:06.204 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 235], 00:18:06.204 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 260], 00:18:06.204 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 334], 00:18:06.204 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42730], 00:18:06.204 | 99.99th=[42730] 00:18:06.204 write: IOPS=1032, BW=4132KiB/s (4231kB/s)(4136KiB/1001msec); 0 zone resets 00:18:06.204 slat (nsec): min=7400, max=48149, avg=11265.41, stdev=5087.86 00:18:06.204 clat (usec): min=155, max=455, avg=201.96, stdev=45.50 00:18:06.204 lat (usec): min=163, max=495, avg=213.22, stdev=48.91 00:18:06.204 clat percentiles (usec): 00:18:06.204 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 176], 00:18:06.204 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:18:06.204 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 245], 95.00th=[ 310], 00:18:06.204 | 99.00th=[ 388], 99.50th=[ 420], 99.90th=[ 441], 99.95th=[ 457], 00:18:06.204 | 99.99th=[ 457] 00:18:06.204 bw ( KiB/s): min= 4096, max= 4096, per=27.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:06.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:06.205 lat (usec) : 250=69.24%, 500=30.13% 00:18:06.205 lat (msec) : 10=0.05%, 50=0.58% 00:18:06.205 cpu : usr=1.80%, sys=2.10%, ctx=2058, majf=0, minf=2 00:18:06.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:06.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.205 issued rwts: total=1024,1034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:06.205 job3: (groupid=0, jobs=1): err= 0: pid=3783161: Fri Apr 26 14:58:51 2024 00:18:06.205 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:06.205 slat (nsec): min=5658, max=49511, avg=8534.04, stdev=3285.17 00:18:06.205 clat (usec): min=204, max=41007, avg=399.15, stdev=2350.45 00:18:06.205 lat (usec): min=211, max=41022, avg=407.69, 
stdev=2350.86 00:18:06.205 clat percentiles (usec): 00:18:06.205 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:18:06.205 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 00:18:06.205 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 322], 00:18:06.205 | 99.00th=[ 379], 99.50th=[ 420], 99.90th=[41157], 99.95th=[41157], 00:18:06.205 | 99.99th=[41157] 00:18:06.205 write: IOPS=1798, BW=7193KiB/s (7365kB/s)(7200KiB/1001msec); 0 zone resets 00:18:06.205 slat (usec): min=7, max=128, avg=11.46, stdev= 5.73 00:18:06.205 clat (usec): min=140, max=530, avg=191.36, stdev=38.01 00:18:06.205 lat (usec): min=148, max=541, avg=202.82, stdev=41.05 00:18:06.205 clat percentiles (usec): 00:18:06.205 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:18:06.205 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188], 00:18:06.205 | 70.00th=[ 202], 80.00th=[ 223], 90.00th=[ 243], 95.00th=[ 262], 00:18:06.205 | 99.00th=[ 306], 99.50th=[ 330], 99.90th=[ 449], 99.95th=[ 529], 00:18:06.205 | 99.99th=[ 529] 00:18:06.205 bw ( KiB/s): min= 4712, max= 4712, per=31.79%, avg=4712.00, stdev= 0.00, samples=1 00:18:06.205 iops : min= 1178, max= 1178, avg=1178.00, stdev= 0.00, samples=1 00:18:06.205 lat (usec) : 250=75.42%, 500=24.37%, 750=0.03% 00:18:06.205 lat (msec) : 20=0.03%, 50=0.15% 00:18:06.205 cpu : usr=2.10%, sys=3.90%, ctx=3337, majf=0, minf=1 00:18:06.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:06.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.205 issued rwts: total=1536,1800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:06.205 00:18:06.205 Run status group 0 (all jobs): 00:18:06.205 READ: bw=10.9MiB/s (11.4MB/s), 197KiB/s-6138KiB/s (202kB/s-6285kB/s), io=11.3MiB (11.9MB), run=1001-1041msec 00:18:06.205 WRITE: bw=14.5MiB/s (15.2MB/s), 1967KiB/s-7193KiB/s (2015kB/s-7365kB/s), io=15.1MiB (15.8MB), run=1001-1041msec 00:18:06.205 00:18:06.205 Disk stats (read/write): 00:18:06.205 nvme0n1: ios=335/512, merge=0/0, ticks=1086/124, in_queue=1210, util=97.60% 00:18:06.205 nvme0n2: ios=50/512, merge=0/0, ticks=710/97, in_queue=807, util=86.46% 00:18:06.205 nvme0n3: ios=512/915, merge=0/0, ticks=640/179, in_queue=819, util=88.89% 00:18:06.205 nvme0n4: ios=1188/1536, merge=0/0, ticks=1002/278, in_queue=1280, util=97.78% 00:18:06.205 14:58:51 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:06.205 [global] 00:18:06.205 thread=1 00:18:06.205 invalidate=1 00:18:06.205 rw=randwrite 00:18:06.205 time_based=1 00:18:06.205 runtime=1 00:18:06.205 ioengine=libaio 00:18:06.205 direct=1 00:18:06.205 bs=4096 00:18:06.205 iodepth=1 00:18:06.205 norandommap=0 00:18:06.205 numjobs=1 00:18:06.205 00:18:06.205 verify_dump=1 00:18:06.205 verify_backlog=512 00:18:06.205 verify_state_save=0 00:18:06.205 do_verify=1 00:18:06.205 verify=crc32c-intel 00:18:06.205 [job0] 00:18:06.205 filename=/dev/nvme0n1 00:18:06.205 [job1] 00:18:06.205 filename=/dev/nvme0n2 00:18:06.205 [job2] 00:18:06.205 filename=/dev/nvme0n3 00:18:06.205 [job3] 00:18:06.205 filename=/dev/nvme0n4 00:18:06.205 Could not set queue depth (nvme0n1) 00:18:06.205 Could not set queue depth (nvme0n2) 00:18:06.205 Could not set queue depth (nvme0n3) 00:18:06.205 Could not set queue depth (nvme0n4) 
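Note: each fio-wrapper pass generates one job per namespace of nqn.2016-06.io.spdk:cnode1, so /dev/nvme0n1..nvme0n4 line up with Malloc0, Malloc1, raid0 and concat0 in the order they were added. The "Could not set queue depth" lines are most likely fio failing to read the SCSI-style queue_depth sysfs attribute, which NVMe block devices do not expose; it is cosmetic, and the job lines below show all four jobs proceeding. For reference, one job of this randwrite pass expressed as a standalone command line — a sketch reconstructed flag-for-flag from the [global] section echoed above, nothing beyond what the log shows:

    # One of the four jobs as a plain fio invocation (sketch):
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 \
        --ioengine=libaio --direct=1 --invalidate=1 \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 \
        --verify_dump=1 --verify_state_save=0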
00:18:06.462 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.462 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.462 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.462 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.462 fio-3.35 00:18:06.462 Starting 4 threads 00:18:07.836 00:18:07.836 job0: (groupid=0, jobs=1): err= 0: pid=3783387: Fri Apr 26 14:58:53 2024 00:18:07.836 read: IOPS=298, BW=1195KiB/s (1223kB/s)(1196KiB/1001msec) 00:18:07.836 slat (nsec): min=7228, max=55642, avg=18401.51, stdev=9386.54 00:18:07.836 clat (usec): min=291, max=41395, avg=2887.29, stdev=9533.36 00:18:07.836 lat (usec): min=300, max=41416, avg=2905.70, stdev=9534.85 00:18:07.836 clat percentiles (usec): 00:18:07.836 | 1.00th=[ 297], 5.00th=[ 322], 10.00th=[ 347], 20.00th=[ 420], 00:18:07.836 | 30.00th=[ 469], 40.00th=[ 486], 50.00th=[ 494], 60.00th=[ 506], 00:18:07.836 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 627], 95.00th=[41157], 00:18:07.836 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:07.836 | 99.99th=[41157] 00:18:07.836 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:07.836 slat (nsec): min=8487, max=67783, avg=12061.20, stdev=6877.60 00:18:07.836 clat (usec): min=159, max=1149, avg=239.19, stdev=72.21 00:18:07.836 lat (usec): min=168, max=1158, avg=251.26, stdev=72.46 00:18:07.836 clat percentiles (usec): 00:18:07.836 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 194], 20.00th=[ 204], 00:18:07.836 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:18:07.836 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 293], 95.00th=[ 318], 00:18:07.836 | 99.00th=[ 392], 99.50th=[ 816], 99.90th=[ 1156], 99.95th=[ 1156], 00:18:07.836 | 99.99th=[ 1156] 00:18:07.836 bw ( KiB/s): min= 4096, max= 4096, per=17.85%, avg=4096.00, stdev= 0.00, samples=1 00:18:07.836 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:07.836 lat (usec) : 250=47.35%, 500=35.88%, 750=13.93%, 1000=0.49% 00:18:07.836 lat (msec) : 2=0.12%, 50=2.22% 00:18:07.836 cpu : usr=0.50%, sys=1.80%, ctx=814, majf=0, minf=2 00:18:07.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.836 issued rwts: total=299,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.836 job1: (groupid=0, jobs=1): err= 0: pid=3783389: Fri Apr 26 14:58:53 2024 00:18:07.836 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:07.836 slat (nsec): min=6872, max=84554, avg=12738.62, stdev=7869.66 00:18:07.836 clat (usec): min=219, max=735, avg=352.17, stdev=88.89 00:18:07.836 lat (usec): min=227, max=767, avg=364.91, stdev=94.68 00:18:07.837 clat percentiles (usec): 00:18:07.837 | 1.00th=[ 233], 5.00th=[ 247], 10.00th=[ 262], 20.00th=[ 285], 00:18:07.837 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 343], 00:18:07.837 | 70.00th=[ 367], 80.00th=[ 420], 90.00th=[ 498], 95.00th=[ 537], 00:18:07.837 | 99.00th=[ 603], 99.50th=[ 660], 99.90th=[ 725], 99.95th=[ 734], 00:18:07.837 | 99.99th=[ 734] 00:18:07.837 write: IOPS=1643, BW=6573KiB/s 
(6731kB/s)(6580KiB/1001msec); 0 zone resets 00:18:07.837 slat (nsec): min=7993, max=70179, avg=15213.74, stdev=8764.82 00:18:07.837 clat (usec): min=153, max=1166, avg=244.08, stdev=76.62 00:18:07.837 lat (usec): min=161, max=1176, avg=259.29, stdev=82.76 00:18:07.837 clat percentiles (usec): 00:18:07.837 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 182], 00:18:07.837 | 30.00th=[ 194], 40.00th=[ 208], 50.00th=[ 225], 60.00th=[ 243], 00:18:07.837 | 70.00th=[ 265], 80.00th=[ 297], 90.00th=[ 351], 95.00th=[ 392], 00:18:07.837 | 99.00th=[ 433], 99.50th=[ 445], 99.90th=[ 947], 99.95th=[ 1172], 00:18:07.837 | 99.99th=[ 1172] 00:18:07.837 bw ( KiB/s): min= 7168, max= 7168, per=31.23%, avg=7168.00, stdev= 0.00, samples=1 00:18:07.837 iops : min= 1792, max= 1792, avg=1792.00, stdev= 0.00, samples=1 00:18:07.837 lat (usec) : 250=35.87%, 500=59.23%, 750=4.81%, 1000=0.06% 00:18:07.837 lat (msec) : 2=0.03% 00:18:07.837 cpu : usr=4.10%, sys=5.40%, ctx=3181, majf=0, minf=1 00:18:07.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.837 issued rwts: total=1536,1645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.837 job2: (groupid=0, jobs=1): err= 0: pid=3783405: Fri Apr 26 14:58:53 2024 00:18:07.837 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:07.837 slat (nsec): min=8038, max=61327, avg=13320.31, stdev=6498.31 00:18:07.837 clat (usec): min=226, max=1320, avg=369.16, stdev=82.94 00:18:07.837 lat (usec): min=235, max=1353, avg=382.48, stdev=86.28 00:18:07.837 clat percentiles (usec): 00:18:07.837 | 1.00th=[ 249], 5.00th=[ 273], 10.00th=[ 297], 20.00th=[ 306], 00:18:07.837 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 367], 00:18:07.837 | 70.00th=[ 392], 80.00th=[ 453], 90.00th=[ 502], 95.00th=[ 523], 00:18:07.837 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 635], 99.95th=[ 1319], 00:18:07.837 | 99.99th=[ 1319] 00:18:07.837 write: IOPS=1536, BW=6146KiB/s (6293kB/s)(6152KiB/1001msec); 0 zone resets 00:18:07.837 slat (nsec): min=9357, max=65209, avg=16415.68, stdev=7693.34 00:18:07.837 clat (usec): min=167, max=1206, avg=243.61, stdev=60.40 00:18:07.837 lat (usec): min=177, max=1238, avg=260.03, stdev=64.32 00:18:07.837 clat percentiles (usec): 00:18:07.837 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:18:07.837 | 30.00th=[ 206], 40.00th=[ 221], 50.00th=[ 243], 60.00th=[ 260], 00:18:07.837 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 310], 00:18:07.837 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[ 1004], 99.95th=[ 1205], 00:18:07.837 | 99.99th=[ 1205] 00:18:07.837 bw ( KiB/s): min= 8192, max= 8192, per=35.70%, avg=8192.00, stdev= 0.00, samples=1 00:18:07.837 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:07.837 lat (usec) : 250=27.52%, 500=67.21%, 750=5.14%, 1000=0.07% 00:18:07.837 lat (msec) : 2=0.07% 00:18:07.837 cpu : usr=2.60%, sys=6.80%, ctx=3076, majf=0, minf=1 00:18:07.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.837 issued rwts: total=1536,1538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.837 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:18:07.837 job3: (groupid=0, jobs=1): err= 0: pid=3783411: Fri Apr 26 14:58:53 2024 00:18:07.837 read: IOPS=1558, BW=6234KiB/s (6383kB/s)(6240KiB/1001msec) 00:18:07.837 slat (nsec): min=6130, max=26968, avg=7435.37, stdev=2080.96 00:18:07.837 clat (usec): min=243, max=793, avg=333.20, stdev=69.91 00:18:07.837 lat (usec): min=250, max=801, avg=340.64, stdev=70.21 00:18:07.837 clat percentiles (usec): 00:18:07.837 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:18:07.837 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 355], 00:18:07.837 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 433], 95.00th=[ 461], 00:18:07.837 | 99.00th=[ 523], 99.50th=[ 578], 99.90th=[ 766], 99.95th=[ 791], 00:18:07.837 | 99.99th=[ 791] 00:18:07.837 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:07.837 slat (usec): min=7, max=120, avg=10.34, stdev= 6.72 00:18:07.837 clat (usec): min=152, max=513, avg=213.80, stdev=45.73 00:18:07.837 lat (usec): min=165, max=544, avg=224.14, stdev=47.72 00:18:07.837 clat percentiles (usec): 00:18:07.837 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:18:07.837 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 204], 60.00th=[ 212], 00:18:07.837 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 260], 95.00th=[ 297], 00:18:07.837 | 99.00th=[ 404], 99.50th=[ 408], 99.90th=[ 478], 99.95th=[ 482], 00:18:07.837 | 99.99th=[ 515] 00:18:07.837 bw ( KiB/s): min= 8192, max= 8192, per=35.70%, avg=8192.00, stdev= 0.00, samples=1 00:18:07.837 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:07.837 lat (usec) : 250=49.67%, 500=49.39%, 750=0.89%, 1000=0.06% 00:18:07.837 cpu : usr=3.20%, sys=3.40%, ctx=3609, majf=0, minf=1 00:18:07.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.837 issued rwts: total=1560,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.837 00:18:07.837 Run status group 0 (all jobs): 00:18:07.837 READ: bw=19.2MiB/s (20.2MB/s), 1195KiB/s-6234KiB/s (1223kB/s-6383kB/s), io=19.3MiB (20.2MB), run=1001-1001msec 00:18:07.837 WRITE: bw=22.4MiB/s (23.5MB/s), 2046KiB/s-8184KiB/s (2095kB/s-8380kB/s), io=22.4MiB (23.5MB), run=1001-1001msec 00:18:07.837 00:18:07.837 Disk stats (read/write): 00:18:07.837 nvme0n1: ios=345/512, merge=0/0, ticks=1284/114, in_queue=1398, util=97.09% 00:18:07.837 nvme0n2: ios=1119/1536, merge=0/0, ticks=529/371, in_queue=900, util=90.00% 00:18:07.837 nvme0n3: ios=1109/1536, merge=0/0, ticks=860/359, in_queue=1219, util=97.27% 00:18:07.837 nvme0n4: ios=1362/1536, merge=0/0, ticks=458/316, in_queue=774, util=89.50% 00:18:07.837 14:58:53 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:07.837 [global] 00:18:07.837 thread=1 00:18:07.837 invalidate=1 00:18:07.837 rw=write 00:18:07.837 time_based=1 00:18:07.837 runtime=1 00:18:07.837 ioengine=libaio 00:18:07.837 direct=1 00:18:07.837 bs=4096 00:18:07.837 iodepth=128 00:18:07.837 norandommap=0 00:18:07.837 numjobs=1 00:18:07.837 00:18:07.837 verify_dump=1 00:18:07.837 verify_backlog=512 00:18:07.837 verify_state_save=0 00:18:07.837 do_verify=1 00:18:07.837 verify=crc32c-intel 00:18:07.837 [job0] 00:18:07.837 filename=/dev/nvme0n1 00:18:07.837 
[job1] 00:18:07.837 filename=/dev/nvme0n2 00:18:07.837 [job2] 00:18:07.837 filename=/dev/nvme0n3 00:18:07.837 [job3] 00:18:07.837 filename=/dev/nvme0n4 00:18:07.837 Could not set queue depth (nvme0n1) 00:18:07.837 Could not set queue depth (nvme0n2) 00:18:07.837 Could not set queue depth (nvme0n3) 00:18:07.837 Could not set queue depth (nvme0n4) 00:18:07.837 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:07.837 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:07.837 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:07.837 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:07.837 fio-3.35 00:18:07.837 Starting 4 threads 00:18:09.212 00:18:09.212 job0: (groupid=0, jobs=1): err= 0: pid=3783742: Fri Apr 26 14:58:54 2024 00:18:09.212 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1003msec) 00:18:09.212 slat (usec): min=3, max=7072, avg=88.44, stdev=512.05 00:18:09.212 clat (usec): min=1404, max=18533, avg=11300.95, stdev=1687.53 00:18:09.212 lat (usec): min=3917, max=18576, avg=11389.39, stdev=1718.70 00:18:09.212 clat percentiles (usec): 00:18:09.212 | 1.00th=[ 6718], 5.00th=[ 8356], 10.00th=[ 9241], 20.00th=[10814], 00:18:09.212 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:18:09.212 | 70.00th=[11600], 80.00th=[12125], 90.00th=[13304], 95.00th=[14484], 00:18:09.212 | 99.00th=[16188], 99.50th=[16581], 99.90th=[16909], 99.95th=[17695], 00:18:09.212 | 99.99th=[18482] 00:18:09.212 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:18:09.212 slat (usec): min=5, max=5247, avg=79.43, stdev=401.32 00:18:09.212 clat (usec): min=5375, max=16435, avg=11265.33, stdev=1577.09 00:18:09.212 lat (usec): min=5383, max=16778, avg=11344.76, stdev=1572.57 00:18:09.212 clat percentiles (usec): 00:18:09.212 | 1.00th=[ 6718], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10552], 00:18:09.212 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:18:09.212 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12911], 95.00th=[14484], 00:18:09.212 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16450], 99.95th=[16450], 00:18:09.212 | 99.99th=[16450] 00:18:09.212 bw ( KiB/s): min=21640, max=23416, per=34.46%, avg=22528.00, stdev=1255.82, samples=2 00:18:09.212 iops : min= 5410, max= 5854, avg=5632.00, stdev=313.96, samples=2 00:18:09.212 lat (msec) : 2=0.01%, 4=0.09%, 10=13.85%, 20=86.05% 00:18:09.212 cpu : usr=6.79%, sys=11.18%, ctx=587, majf=0, minf=13 00:18:09.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:09.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:09.212 issued rwts: total=5604,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:09.212 job1: (groupid=0, jobs=1): err= 0: pid=3783743: Fri Apr 26 14:58:54 2024 00:18:09.212 read: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(10.0MiB/1019msec) 00:18:09.212 slat (usec): min=3, max=21332, avg=192.75, stdev=1221.81 00:18:09.212 clat (usec): min=5335, max=77029, avg=20815.57, stdev=17929.08 00:18:09.212 lat (usec): min=5341, max=77045, avg=21008.32, stdev=18034.96 00:18:09.212 clat percentiles (usec): 00:18:09.212 | 1.00th=[ 6456], 5.00th=[10552], 
10.00th=[11076], 20.00th=[11600], 00:18:09.212 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[13698], 00:18:09.212 | 70.00th=[16057], 80.00th=[18744], 90.00th=[55313], 95.00th=[69731], 00:18:09.212 | 99.00th=[73925], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:18:09.212 | 99.99th=[77071] 00:18:09.212 write: IOPS=2871, BW=11.2MiB/s (11.8MB/s)(11.4MiB/1019msec); 0 zone resets 00:18:09.212 slat (usec): min=5, max=46788, avg=163.05, stdev=1138.73 00:18:09.212 clat (usec): min=3218, max=77044, avg=22877.99, stdev=10316.71 00:18:09.212 lat (usec): min=3226, max=86377, avg=23041.04, stdev=10419.05 00:18:09.212 clat percentiles (usec): 00:18:09.212 | 1.00th=[ 4424], 5.00th=[ 8356], 10.00th=[12518], 20.00th=[15401], 00:18:09.212 | 30.00th=[17695], 40.00th=[22414], 50.00th=[23725], 60.00th=[23987], 00:18:09.212 | 70.00th=[24511], 80.00th=[25035], 90.00th=[32637], 95.00th=[43779], 00:18:09.212 | 99.00th=[62653], 99.50th=[65799], 99.90th=[74974], 99.95th=[77071], 00:18:09.212 | 99.99th=[77071] 00:18:09.212 bw ( KiB/s): min=10376, max=12016, per=17.13%, avg=11196.00, stdev=1159.66, samples=2 00:18:09.212 iops : min= 2594, max= 3004, avg=2799.00, stdev=289.91, samples=2 00:18:09.212 lat (msec) : 4=0.33%, 10=5.18%, 20=51.49%, 50=35.20%, 100=7.80% 00:18:09.212 cpu : usr=3.54%, sys=5.89%, ctx=353, majf=0, minf=15 00:18:09.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:09.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:09.212 issued rwts: total=2560,2926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:09.212 job2: (groupid=0, jobs=1): err= 0: pid=3783744: Fri Apr 26 14:58:54 2024 00:18:09.212 read: IOPS=4813, BW=18.8MiB/s (19.7MB/s)(19.0MiB/1008msec) 00:18:09.212 slat (usec): min=2, max=11664, avg=102.82, stdev=643.60 00:18:09.212 clat (usec): min=4016, max=24487, avg=12844.43, stdev=2412.83 00:18:09.212 lat (usec): min=4294, max=24590, avg=12947.25, stdev=2455.33 00:18:09.212 clat percentiles (usec): 00:18:09.212 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11863], 00:18:09.212 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:18:09.212 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15401], 95.00th=[17433], 00:18:09.212 | 99.00th=[22676], 99.50th=[23462], 99.90th=[24249], 99.95th=[24249], 00:18:09.212 | 99.99th=[24511] 00:18:09.212 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:18:09.212 slat (usec): min=4, max=15329, avg=91.82, stdev=518.84 00:18:09.212 clat (usec): min=1434, max=26419, avg=12731.63, stdev=2442.58 00:18:09.212 lat (usec): min=1459, max=26440, avg=12823.45, stdev=2474.02 00:18:09.212 clat percentiles (usec): 00:18:09.212 | 1.00th=[ 4817], 5.00th=[ 8586], 10.00th=[11207], 20.00th=[11994], 00:18:09.212 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:18:09.212 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13829], 95.00th=[16188], 00:18:09.212 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:18:09.212 | 99.99th=[26346] 00:18:09.212 bw ( KiB/s): min=20480, max=20480, per=31.33%, avg=20480.00, stdev= 0.00, samples=2 00:18:09.212 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:18:09.212 lat (msec) : 2=0.03%, 4=0.28%, 10=6.58%, 20=91.05%, 50=2.06% 00:18:09.212 cpu : usr=4.57%, sys=5.96%, ctx=574, majf=0, minf=11 
00:18:09.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:09.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:09.213 issued rwts: total=4852,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:09.213 job3: (groupid=0, jobs=1): err= 0: pid=3783746: Fri Apr 26 14:58:54 2024 00:18:09.213 read: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(10.0MiB/1019msec) 00:18:09.213 slat (usec): min=3, max=16111, avg=154.57, stdev=1120.39 00:18:09.213 clat (usec): min=6145, max=47147, avg=18677.84, stdev=6579.08 00:18:09.213 lat (usec): min=6157, max=47193, avg=18832.41, stdev=6681.37 00:18:09.213 clat percentiles (usec): 00:18:09.213 | 1.00th=[10945], 5.00th=[11207], 10.00th=[12911], 20.00th=[13698], 00:18:09.213 | 30.00th=[14091], 40.00th=[14353], 50.00th=[15139], 60.00th=[19006], 00:18:09.213 | 70.00th=[20841], 80.00th=[24773], 90.00th=[29754], 95.00th=[32375], 00:18:09.213 | 99.00th=[33817], 99.50th=[34866], 99.90th=[41157], 99.95th=[45351], 00:18:09.213 | 99.99th=[46924] 00:18:09.213 write: IOPS=2918, BW=11.4MiB/s (12.0MB/s)(11.6MiB/1019msec); 0 zone resets 00:18:09.213 slat (usec): min=4, max=10893, avg=194.59, stdev=874.46 00:18:09.213 clat (usec): min=1325, max=61533, avg=27372.69, stdev=12298.35 00:18:09.213 lat (usec): min=1343, max=61542, avg=27567.28, stdev=12348.68 00:18:09.213 clat percentiles (usec): 00:18:09.213 | 1.00th=[ 8029], 5.00th=[11338], 10.00th=[12649], 20.00th=[19792], 00:18:09.213 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24511], 60.00th=[24773], 00:18:09.213 | 70.00th=[25822], 80.00th=[38011], 90.00th=[48497], 95.00th=[52167], 00:18:09.213 | 99.00th=[57410], 99.50th=[58459], 99.90th=[61604], 99.95th=[61604], 00:18:09.213 | 99.99th=[61604] 00:18:09.213 bw ( KiB/s): min=10488, max=12288, per=17.42%, avg=11388.00, stdev=1272.79, samples=2 00:18:09.213 iops : min= 2622, max= 3072, avg=2847.00, stdev=318.20, samples=2 00:18:09.213 lat (msec) : 2=0.05%, 4=0.31%, 10=1.41%, 20=38.74%, 50=55.24% 00:18:09.213 lat (msec) : 100=4.25% 00:18:09.213 cpu : usr=2.46%, sys=6.09%, ctx=360, majf=0, minf=11 00:18:09.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:09.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:09.213 issued rwts: total=2560,2974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:09.213 00:18:09.213 Run status group 0 (all jobs): 00:18:09.213 READ: bw=59.7MiB/s (62.6MB/s), 9.81MiB/s-21.8MiB/s (10.3MB/s-22.9MB/s), io=60.8MiB (63.8MB), run=1003-1019msec 00:18:09.213 WRITE: bw=63.8MiB/s (66.9MB/s), 11.2MiB/s-21.9MiB/s (11.8MB/s-23.0MB/s), io=65.0MiB (68.2MB), run=1003-1019msec 00:18:09.213 00:18:09.213 Disk stats (read/write): 00:18:09.213 nvme0n1: ios=4656/4903, merge=0/0, ticks=26120/25309, in_queue=51429, util=99.70% 00:18:09.213 nvme0n2: ios=2075/2503, merge=0/0, ticks=41003/54824, in_queue=95827, util=92.89% 00:18:09.213 nvme0n3: ios=4153/4335, merge=0/0, ticks=28778/27771, in_queue=56549, util=90.61% 00:18:09.213 nvme0n4: ios=2105/2559, merge=0/0, ticks=36922/67376, in_queue=104298, util=95.05% 00:18:09.213 14:58:54 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 
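Note: the "Run status group" summaries are essentially total bytes moved divided by the span of the job runtimes. For the iodepth=128 write pass that just finished, READ io=60.8MiB over run=1003-1019msec gives 60.8/1.019 ≈ 59.7 MiB/s, matching the bw figure printed. A one-liner to sanity-check it, with the values copied from the summary above:

    awk 'BEGIN { printf "%.1f MiB/s\n", 60.8 / 1.019 }'   # -> 59.7 MiB/s

The [global] section that follows belongs to the next pass (fio.sh@53, randwrite at iodepth=128).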
00:18:09.213 [global] 00:18:09.213 thread=1 00:18:09.213 invalidate=1 00:18:09.213 rw=randwrite 00:18:09.213 time_based=1 00:18:09.213 runtime=1 00:18:09.213 ioengine=libaio 00:18:09.213 direct=1 00:18:09.213 bs=4096 00:18:09.213 iodepth=128 00:18:09.213 norandommap=0 00:18:09.213 numjobs=1 00:18:09.213 00:18:09.213 verify_dump=1 00:18:09.213 verify_backlog=512 00:18:09.213 verify_state_save=0 00:18:09.213 do_verify=1 00:18:09.213 verify=crc32c-intel 00:18:09.213 [job0] 00:18:09.213 filename=/dev/nvme0n1 00:18:09.213 [job1] 00:18:09.213 filename=/dev/nvme0n2 00:18:09.213 [job2] 00:18:09.213 filename=/dev/nvme0n3 00:18:09.213 [job3] 00:18:09.213 filename=/dev/nvme0n4 00:18:09.213 Could not set queue depth (nvme0n1) 00:18:09.213 Could not set queue depth (nvme0n2) 00:18:09.213 Could not set queue depth (nvme0n3) 00:18:09.213 Could not set queue depth (nvme0n4) 00:18:09.213 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.213 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.213 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.213 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.213 fio-3.35 00:18:09.213 Starting 4 threads 00:18:10.589 00:18:10.589 job0: (groupid=0, jobs=1): err= 0: pid=3783972: Fri Apr 26 14:58:56 2024 00:18:10.589 read: IOPS=4333, BW=16.9MiB/s (17.7MB/s)(17.1MiB/1009msec) 00:18:10.589 slat (usec): min=3, max=14523, avg=115.23, stdev=837.75 00:18:10.589 clat (usec): min=3928, max=39155, avg=14346.66, stdev=4812.18 00:18:10.589 lat (usec): min=4916, max=39163, avg=14461.89, stdev=4867.82 00:18:10.589 clat percentiles (usec): 00:18:10.589 | 1.00th=[ 6128], 5.00th=[10421], 10.00th=[10814], 20.00th=[11469], 00:18:10.589 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12518], 60.00th=[13173], 00:18:10.589 | 70.00th=[14877], 80.00th=[17433], 90.00th=[21103], 95.00th=[24511], 00:18:10.589 | 99.00th=[30278], 99.50th=[32375], 99.90th=[39060], 99.95th=[39060], 00:18:10.589 | 99.99th=[39060] 00:18:10.589 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:18:10.589 slat (usec): min=4, max=12602, avg=98.69, stdev=602.03 00:18:10.589 clat (usec): min=3055, max=55246, avg=14150.11, stdev=7477.51 00:18:10.589 lat (usec): min=3062, max=55257, avg=14248.80, stdev=7536.37 00:18:10.589 clat percentiles (usec): 00:18:10.589 | 1.00th=[ 3785], 5.00th=[ 7439], 10.00th=[ 8586], 20.00th=[10552], 00:18:10.589 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12256], 60.00th=[12780], 00:18:10.589 | 70.00th=[13435], 80.00th=[16712], 90.00th=[22152], 95.00th=[26870], 00:18:10.589 | 99.00th=[49021], 99.50th=[51119], 99.90th=[55313], 99.95th=[55313], 00:18:10.589 | 99.99th=[55313] 00:18:10.589 bw ( KiB/s): min=18352, max=18512, per=26.08%, avg=18432.00, stdev=113.14, samples=2 00:18:10.589 iops : min= 4588, max= 4628, avg=4608.00, stdev=28.28, samples=2 00:18:10.589 lat (msec) : 4=0.55%, 10=9.55%, 20=77.77%, 50=11.63%, 100=0.50% 00:18:10.589 cpu : usr=4.56%, sys=8.13%, ctx=435, majf=0, minf=11 00:18:10.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:10.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.589 issued rwts: total=4372,4608,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:18:10.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.589 job1: (groupid=0, jobs=1): err= 0: pid=3783973: Fri Apr 26 14:58:56 2024 00:18:10.589 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:18:10.589 slat (usec): min=2, max=21294, avg=118.74, stdev=775.11 00:18:10.589 clat (usec): min=6275, max=51730, avg=14827.16, stdev=6453.97 00:18:10.589 lat (usec): min=6281, max=51734, avg=14945.91, stdev=6496.70 00:18:10.589 clat percentiles (usec): 00:18:10.589 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[11600], 00:18:10.589 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13173], 60.00th=[13698], 00:18:10.589 | 70.00th=[13960], 80.00th=[15270], 90.00th=[19792], 95.00th=[25822], 00:18:10.589 | 99.00th=[40633], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:18:10.589 | 99.99th=[51643] 00:18:10.589 write: IOPS=3998, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1002msec); 0 zone resets 00:18:10.589 slat (usec): min=3, max=25467, avg=136.42, stdev=851.68 00:18:10.589 clat (usec): min=1055, max=82535, avg=18373.13, stdev=15609.52 00:18:10.589 lat (usec): min=1071, max=82542, avg=18509.55, stdev=15689.72 00:18:10.589 clat percentiles (usec): 00:18:10.589 | 1.00th=[ 4948], 5.00th=[ 8029], 10.00th=[ 9110], 20.00th=[10683], 00:18:10.589 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12780], 60.00th=[13435], 00:18:10.589 | 70.00th=[14222], 80.00th=[21365], 90.00th=[37487], 95.00th=[62653], 00:18:10.589 | 99.00th=[79168], 99.50th=[80217], 99.90th=[82314], 99.95th=[82314], 00:18:10.589 | 99.99th=[82314] 00:18:10.589 bw ( KiB/s): min=14376, max=16656, per=21.95%, avg=15516.00, stdev=1612.20, samples=2 00:18:10.589 iops : min= 3594, max= 4164, avg=3879.00, stdev=403.05, samples=2 00:18:10.589 lat (msec) : 2=0.14%, 10=11.98%, 20=72.15%, 50=11.55%, 100=4.18% 00:18:10.589 cpu : usr=3.50%, sys=5.39%, ctx=370, majf=0, minf=17 00:18:10.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:10.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.589 issued rwts: total=3584,4006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.589 job2: (groupid=0, jobs=1): err= 0: pid=3783974: Fri Apr 26 14:58:56 2024 00:18:10.589 read: IOPS=4122, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1001msec) 00:18:10.589 slat (usec): min=3, max=10854, avg=111.00, stdev=631.65 00:18:10.589 clat (usec): min=642, max=26019, avg=13815.00, stdev=2568.43 00:18:10.589 lat (usec): min=3558, max=27200, avg=13926.00, stdev=2608.52 00:18:10.589 clat percentiles (usec): 00:18:10.589 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[11600], 20.00th=[12387], 00:18:10.589 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13566], 60.00th=[13698], 00:18:10.589 | 70.00th=[14091], 80.00th=[15139], 90.00th=[16712], 95.00th=[18744], 00:18:10.589 | 99.00th=[23200], 99.50th=[24773], 99.90th=[26084], 99.95th=[26084], 00:18:10.589 | 99.99th=[26084] 00:18:10.589 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:18:10.589 slat (usec): min=4, max=16854, avg=108.39, stdev=530.67 00:18:10.589 clat (usec): min=5272, max=40517, avg=14828.02, stdev=5207.45 00:18:10.589 lat (usec): min=5282, max=40526, avg=14936.42, stdev=5236.86 00:18:10.589 clat percentiles (usec): 00:18:10.589 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[12649], 00:18:10.589 | 30.00th=[12911], 40.00th=[13304], 
50.00th=[13698], 60.00th=[14091], 00:18:10.589 | 70.00th=[14353], 80.00th=[14746], 90.00th=[19006], 95.00th=[26870], 00:18:10.589 | 99.00th=[37487], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:18:10.589 | 99.99th=[40633] 00:18:10.589 bw ( KiB/s): min=19408, max=19408, per=27.46%, avg=19408.00, stdev= 0.00, samples=1 00:18:10.589 iops : min= 4852, max= 4852, avg=4852.00, stdev= 0.00, samples=1 00:18:10.589 lat (usec) : 750=0.01% 00:18:10.589 lat (msec) : 4=0.09%, 10=4.38%, 20=89.40%, 50=6.11% 00:18:10.589 cpu : usr=4.90%, sys=8.10%, ctx=573, majf=0, minf=13 00:18:10.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:10.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.589 issued rwts: total=4127,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.589 job3: (groupid=0, jobs=1): err= 0: pid=3783975: Fri Apr 26 14:58:56 2024 00:18:10.589 read: IOPS=4270, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1008msec) 00:18:10.589 slat (usec): min=2, max=10356, avg=110.35, stdev=587.74 00:18:10.589 clat (usec): min=7536, max=30930, avg=14528.42, stdev=2762.73 00:18:10.589 lat (usec): min=7558, max=30939, avg=14638.77, stdev=2776.69 00:18:10.589 clat percentiles (usec): 00:18:10.589 | 1.00th=[ 8586], 5.00th=[11207], 10.00th=[11863], 20.00th=[13042], 00:18:10.589 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:18:10.589 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16909], 95.00th=[19792], 00:18:10.589 | 99.00th=[23462], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:18:10.589 | 99.99th=[30802] 00:18:10.589 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:18:10.589 slat (usec): min=4, max=18016, avg=104.45, stdev=526.78 00:18:10.589 clat (usec): min=3088, max=30522, avg=14075.09, stdev=2228.97 00:18:10.589 lat (usec): min=3097, max=30537, avg=14179.54, stdev=2252.80 00:18:10.589 clat percentiles (usec): 00:18:10.589 | 1.00th=[ 6390], 5.00th=[11338], 10.00th=[12518], 20.00th=[13042], 00:18:10.589 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14222], 00:18:10.589 | 70.00th=[14353], 80.00th=[15008], 90.00th=[16581], 95.00th=[18220], 00:18:10.589 | 99.00th=[20579], 99.50th=[21627], 99.90th=[23200], 99.95th=[23987], 00:18:10.589 | 99.99th=[30540] 00:18:10.589 bw ( KiB/s): min=17392, max=19472, per=26.08%, avg=18432.00, stdev=1470.78, samples=2 00:18:10.589 iops : min= 4348, max= 4868, avg=4608.00, stdev=367.70, samples=2 00:18:10.589 lat (msec) : 4=0.17%, 10=1.88%, 20=93.67%, 50=4.27% 00:18:10.589 cpu : usr=5.06%, sys=8.14%, ctx=522, majf=0, minf=9 00:18:10.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:10.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.589 issued rwts: total=4305,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.589 00:18:10.589 Run status group 0 (all jobs): 00:18:10.589 READ: bw=63.4MiB/s (66.5MB/s), 14.0MiB/s-16.9MiB/s (14.7MB/s-17.7MB/s), io=64.0MiB (67.1MB), run=1001-1009msec 00:18:10.589 WRITE: bw=69.0MiB/s (72.4MB/s), 15.6MiB/s-18.0MiB/s (16.4MB/s-18.9MB/s), io=69.6MiB (73.0MB), run=1001-1009msec 00:18:10.589 00:18:10.589 Disk stats (read/write): 00:18:10.589 nvme0n1: 
ios=3624/4008, merge=0/0, ticks=47079/56476, in_queue=103555, util=98.10% 00:18:10.589 nvme0n2: ios=3117/3477, merge=0/0, ticks=15663/27784, in_queue=43447, util=98.68% 00:18:10.589 nvme0n3: ios=3584/3799, merge=0/0, ticks=23821/25755, in_queue=49576, util=88.83% 00:18:10.589 nvme0n4: ios=3638/3963, merge=0/0, ticks=17428/16855, in_queue=34283, util=98.63% 00:18:10.589 14:58:56 -- target/fio.sh@55 -- # sync 00:18:10.589 14:58:56 -- target/fio.sh@59 -- # fio_pid=3784107 00:18:10.589 14:58:56 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:10.589 14:58:56 -- target/fio.sh@61 -- # sleep 3 00:18:10.589 [global] 00:18:10.589 thread=1 00:18:10.589 invalidate=1 00:18:10.589 rw=read 00:18:10.589 time_based=1 00:18:10.589 runtime=10 00:18:10.589 ioengine=libaio 00:18:10.589 direct=1 00:18:10.589 bs=4096 00:18:10.589 iodepth=1 00:18:10.589 norandommap=1 00:18:10.589 numjobs=1 00:18:10.589 00:18:10.589 [job0] 00:18:10.589 filename=/dev/nvme0n1 00:18:10.589 [job1] 00:18:10.589 filename=/dev/nvme0n2 00:18:10.589 [job2] 00:18:10.589 filename=/dev/nvme0n3 00:18:10.589 [job3] 00:18:10.589 filename=/dev/nvme0n4 00:18:10.589 Could not set queue depth (nvme0n1) 00:18:10.589 Could not set queue depth (nvme0n2) 00:18:10.589 Could not set queue depth (nvme0n3) 00:18:10.589 Could not set queue depth (nvme0n4) 00:18:10.847 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:10.847 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:10.847 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:10.847 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:10.847 fio-3.35 00:18:10.847 Starting 4 threads 00:18:14.127 14:58:59 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:14.127 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=33665024, buflen=4096 00:18:14.127 fio: pid=3784208, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:14.127 14:58:59 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:14.127 14:58:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:14.127 14:58:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:14.127 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=23179264, buflen=4096 00:18:14.127 fio: pid=3784207, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:14.385 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=40267776, buflen=4096 00:18:14.385 fio: pid=3784205, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:14.385 14:59:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:14.385 14:59:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:14.644 14:59:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:14.644 14:59:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc2 00:18:14.644 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=28512256, buflen=4096 00:18:14.644 fio: pid=3784206, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:14.644 00:18:14.644 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3784205: Fri Apr 26 14:59:00 2024 00:18:14.644 read: IOPS=2834, BW=11.1MiB/s (11.6MB/s)(38.4MiB/3469msec) 00:18:14.644 slat (usec): min=6, max=15668, avg=13.29, stdev=218.58 00:18:14.644 clat (usec): min=200, max=42135, avg=337.34, stdev=1014.36 00:18:14.644 lat (usec): min=206, max=55013, avg=350.63, stdev=1089.33 00:18:14.644 clat percentiles (usec): 00:18:14.644 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 269], 00:18:14.644 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:18:14.644 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 408], 95.00th=[ 465], 00:18:14.644 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 3687], 99.95th=[41157], 00:18:14.644 | 99.99th=[42206] 00:18:14.644 bw ( KiB/s): min= 9992, max=14200, per=38.11%, avg=12306.67, stdev=1596.51, samples=6 00:18:14.644 iops : min= 2498, max= 3550, avg=3076.67, stdev=399.13, samples=6 00:18:14.644 lat (usec) : 250=7.81%, 500=89.98%, 750=2.04%, 1000=0.02% 00:18:14.644 lat (msec) : 2=0.02%, 4=0.03%, 10=0.02%, 50=0.06% 00:18:14.644 cpu : usr=1.27%, sys=3.14%, ctx=9837, majf=0, minf=1 00:18:14.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.644 issued rwts: total=9832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.644 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3784206: Fri Apr 26 14:59:00 2024 00:18:14.644 read: IOPS=1832, BW=7329KiB/s (7505kB/s)(27.2MiB/3799msec) 00:18:14.644 slat (usec): min=4, max=17800, avg=19.58, stdev=343.00 00:18:14.644 clat (usec): min=186, max=42328, avg=523.59, stdev=3204.72 00:18:14.644 lat (usec): min=192, max=42338, avg=543.17, stdev=3222.84 00:18:14.644 clat percentiles (usec): 00:18:14.644 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 225], 00:18:14.644 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 260], 00:18:14.644 | 70.00th=[ 281], 80.00th=[ 318], 90.00th=[ 351], 95.00th=[ 424], 00:18:14.644 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:14.644 | 99.99th=[42206] 00:18:14.644 bw ( KiB/s): min= 96, max=14938, per=21.64%, avg=6987.71, stdev=6673.06, samples=7 00:18:14.644 iops : min= 24, max= 3734, avg=1746.86, stdev=1668.17, samples=7 00:18:14.644 lat (usec) : 250=52.51%, 500=45.29%, 750=1.55% 00:18:14.644 lat (msec) : 4=0.01%, 50=0.62% 00:18:14.644 cpu : usr=0.66%, sys=2.84%, ctx=6968, majf=0, minf=1 00:18:14.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.644 issued rwts: total=6962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.644 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3784207: Fri Apr 26 14:59:00 2024 00:18:14.644 read: 
IOPS=1756, BW=7025KiB/s (7194kB/s)(22.1MiB/3222msec) 00:18:14.644 slat (nsec): min=5458, max=66775, avg=10896.66, stdev=7536.63 00:18:14.644 clat (usec): min=208, max=42000, avg=555.67, stdev=3153.12 00:18:14.644 lat (usec): min=216, max=42012, avg=566.57, stdev=3153.78 00:18:14.644 clat percentiles (usec): 00:18:14.644 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 260], 00:18:14.644 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:18:14.644 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[ 412], 95.00th=[ 469], 00:18:14.644 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:18:14.644 | 99.99th=[42206] 00:18:14.644 bw ( KiB/s): min= 96, max=13256, per=23.34%, avg=7537.33, stdev=6013.80, samples=6 00:18:14.644 iops : min= 24, max= 3314, avg=1884.33, stdev=1503.45, samples=6 00:18:14.644 lat (usec) : 250=14.82%, 500=82.35%, 750=2.10%, 1000=0.05% 00:18:14.644 lat (msec) : 2=0.02%, 4=0.04%, 50=0.60% 00:18:14.644 cpu : usr=0.87%, sys=2.70%, ctx=5663, majf=0, minf=1 00:18:14.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.644 issued rwts: total=5660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.644 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3784208: Fri Apr 26 14:59:00 2024 00:18:14.644 read: IOPS=2862, BW=11.2MiB/s (11.7MB/s)(32.1MiB/2872msec) 00:18:14.644 slat (nsec): min=4506, max=74208, avg=13149.94, stdev=8676.11 00:18:14.644 clat (usec): min=221, max=40752, avg=333.14, stdev=631.64 00:18:14.644 lat (usec): min=228, max=40784, avg=346.29, stdev=632.40 00:18:14.644 clat percentiles (usec): 00:18:14.644 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:18:14.644 | 30.00th=[ 281], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:18:14.644 | 70.00th=[ 330], 80.00th=[ 367], 90.00th=[ 429], 95.00th=[ 449], 00:18:14.644 | 99.00th=[ 486], 99.50th=[ 523], 99.90th=[ 668], 99.95th=[ 955], 00:18:14.644 | 99.99th=[40633] 00:18:14.644 bw ( KiB/s): min= 8928, max=12968, per=35.00%, avg=11304.00, stdev=1563.44, samples=5 00:18:14.644 iops : min= 2232, max= 3242, avg=2826.00, stdev=390.86, samples=5 00:18:14.644 lat (usec) : 250=1.68%, 500=97.60%, 750=0.61%, 1000=0.06% 00:18:14.644 lat (msec) : 2=0.01%, 50=0.02% 00:18:14.644 cpu : usr=1.57%, sys=4.39%, ctx=8220, majf=0, minf=1 00:18:14.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.644 issued rwts: total=8220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.644 00:18:14.644 Run status group 0 (all jobs): 00:18:14.644 READ: bw=31.5MiB/s (33.1MB/s), 7025KiB/s-11.2MiB/s (7194kB/s-11.7MB/s), io=120MiB (126MB), run=2872-3799msec 00:18:14.644 00:18:14.644 Disk stats (read/write): 00:18:14.644 nvme0n1: ios=9828/0, merge=0/0, ticks=3132/0, in_queue=3132, util=94.94% 00:18:14.644 nvme0n2: ios=6483/0, merge=0/0, ticks=3425/0, in_queue=3425, util=95.53% 00:18:14.644 nvme0n3: ios=5708/0, merge=0/0, ticks=4201/0, in_queue=4201, util=99.81% 00:18:14.644 nvme0n4: ios=8153/0, merge=0/0, ticks=2601/0, 
in_queue=2601, util=96.71% 00:18:14.902 14:59:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:14.902 14:59:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:15.160 14:59:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.160 14:59:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:15.418 14:59:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.418 14:59:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:15.677 14:59:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.677 14:59:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:15.935 14:59:01 -- target/fio.sh@69 -- # fio_status=0 00:18:15.935 14:59:01 -- target/fio.sh@70 -- # wait 3784107 00:18:15.935 14:59:01 -- target/fio.sh@70 -- # fio_status=4 00:18:15.935 14:59:01 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.193 14:59:01 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:16.193 14:59:01 -- common/autotest_common.sh@1205 -- # local i=0 00:18:16.193 14:59:01 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:16.193 14:59:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.193 14:59:01 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:16.193 14:59:01 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.193 14:59:01 -- common/autotest_common.sh@1217 -- # return 0 00:18:16.193 14:59:01 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:16.193 14:59:01 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:16.193 nvmf hotplug test: fio failed as expected 00:18:16.193 14:59:01 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.451 14:59:01 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:16.451 14:59:01 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:16.451 14:59:01 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:16.451 14:59:01 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:16.451 14:59:01 -- target/fio.sh@91 -- # nvmftestfini 00:18:16.451 14:59:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:16.451 14:59:01 -- nvmf/common.sh@117 -- # sync 00:18:16.451 14:59:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.451 14:59:01 -- nvmf/common.sh@120 -- # set +e 00:18:16.451 14:59:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.451 14:59:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.451 rmmod nvme_tcp 00:18:16.451 rmmod nvme_fabrics 00:18:16.451 rmmod nvme_keyring 00:18:16.451 14:59:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.451 14:59:01 -- nvmf/common.sh@124 -- # set -e 00:18:16.451 14:59:01 -- nvmf/common.sh@125 -- # return 0 00:18:16.451 14:59:01 -- nvmf/common.sh@478 -- # '[' -n 3782213 ']' 00:18:16.451 14:59:01 -- nvmf/common.sh@479 -- # killprocess 3782213 
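[Annotation] The hotplug phase traced above reduces to: start a timed fio read workload through the wrapper, pull the RAID and malloc bdevs out from under it over RPC, and treat a non-zero fio exit as success. What follows is a minimal sketch of that sequence reconstructed from the trace, not copied from target/fio.sh itself; the workspace path, bdev names, and wrapper flags are the ones printed in the log.

#!/usr/bin/env bash
# Sketch of the hotplug phase as traced above (inferred from the log,
# not taken from target/fio.sh).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Kick off a 10-second read workload against the connected namespaces,
# matching the wrapper flags logged above.
"$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Delete the backing bdevs while fio is still issuing reads.
"$SPDK/scripts/rpc.py" bdev_raid_delete concat0
"$SPDK/scripts/rpc.py" bdev_raid_delete raid0
for malloc in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$SPDK/scripts/rpc.py" bdev_malloc_delete "$malloc"
done

# fio now fails with Remote I/O errors on each file; a non-zero exit
# status is the expected outcome of this test.
fio_status=0
wait "$fio_pid" || fio_status=4
[ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'

This is why the trace records err=121 (Remote I/O error) for three jobs, err=5 (Input/output error) for the fourth, and still prints "nvmf hotplug test: fio failed as expected" before tearing the target down.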
00:18:16.451 14:59:01 -- common/autotest_common.sh@936 -- # '[' -z 3782213 ']' 00:18:16.451 14:59:01 -- common/autotest_common.sh@940 -- # kill -0 3782213 00:18:16.451 14:59:02 -- common/autotest_common.sh@941 -- # uname 00:18:16.451 14:59:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:16.451 14:59:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3782213 00:18:16.451 14:59:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:16.451 14:59:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:16.451 14:59:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3782213' 00:18:16.451 killing process with pid 3782213 00:18:16.451 14:59:02 -- common/autotest_common.sh@955 -- # kill 3782213 00:18:16.451 14:59:02 -- common/autotest_common.sh@960 -- # wait 3782213 00:18:16.710 14:59:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:16.710 14:59:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:16.710 14:59:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:16.710 14:59:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.710 14:59:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.710 14:59:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.710 14:59:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.710 14:59:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.612 14:59:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:18.612 00:18:18.612 real 0m23.186s 00:18:18.612 user 1m21.384s 00:18:18.612 sys 0m7.081s 00:18:18.612 14:59:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:18.612 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:18:18.612 ************************************ 00:18:18.612 END TEST nvmf_fio_target 00:18:18.612 ************************************ 00:18:18.612 14:59:04 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:18.612 14:59:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:18.612 14:59:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:18.612 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:18:18.870 ************************************ 00:18:18.870 START TEST nvmf_bdevio 00:18:18.870 ************************************ 00:18:18.870 14:59:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:18.870 * Looking for test storage... 
00:18:18.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.870 14:59:04 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.870 14:59:04 -- nvmf/common.sh@7 -- # uname -s 00:18:18.870 14:59:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.870 14:59:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.870 14:59:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.870 14:59:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.870 14:59:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.870 14:59:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.870 14:59:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.870 14:59:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.870 14:59:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.870 14:59:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.870 14:59:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:18.870 14:59:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:18.870 14:59:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.870 14:59:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.870 14:59:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.870 14:59:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.870 14:59:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.870 14:59:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.870 14:59:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.870 14:59:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.870 14:59:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.870 14:59:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.870 14:59:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.870 14:59:04 -- paths/export.sh@5 -- # export PATH 00:18:18.870 14:59:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.870 14:59:04 -- nvmf/common.sh@47 -- # : 0 00:18:18.871 14:59:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.871 14:59:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.871 14:59:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.871 14:59:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.871 14:59:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.871 14:59:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.871 14:59:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.871 14:59:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.871 14:59:04 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.871 14:59:04 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.871 14:59:04 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:18.871 14:59:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:18.871 14:59:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.871 14:59:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:18.871 14:59:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:18.871 14:59:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:18.871 14:59:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.871 14:59:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.871 14:59:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.871 14:59:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:18.871 14:59:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:18.871 14:59:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:18.871 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:18:20.805 14:59:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:20.805 14:59:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:20.805 14:59:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:20.805 14:59:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:20.805 14:59:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:20.805 14:59:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:20.805 14:59:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:20.805 14:59:06 -- nvmf/common.sh@295 -- # net_devs=() 00:18:20.805 14:59:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:20.805 14:59:06 -- nvmf/common.sh@296 
-- # e810=() 00:18:20.805 14:59:06 -- nvmf/common.sh@296 -- # local -ga e810 00:18:20.805 14:59:06 -- nvmf/common.sh@297 -- # x722=() 00:18:20.805 14:59:06 -- nvmf/common.sh@297 -- # local -ga x722 00:18:20.805 14:59:06 -- nvmf/common.sh@298 -- # mlx=() 00:18:20.805 14:59:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:20.805 14:59:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.805 14:59:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:20.805 14:59:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:20.805 14:59:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:20.805 14:59:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:20.805 14:59:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:20.805 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:20.805 14:59:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:20.805 14:59:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:20.805 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:20.805 14:59:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:20.805 14:59:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:20.805 14:59:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.805 14:59:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:20.805 14:59:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.805 14:59:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:20.805 Found 
net devices under 0000:84:00.0: cvl_0_0 00:18:20.805 14:59:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.805 14:59:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:20.805 14:59:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.805 14:59:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:20.805 14:59:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.805 14:59:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:20.805 Found net devices under 0000:84:00.1: cvl_0_1 00:18:20.805 14:59:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.805 14:59:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:20.805 14:59:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:20.805 14:59:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:20.805 14:59:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:20.805 14:59:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.805 14:59:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.805 14:59:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.805 14:59:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:20.805 14:59:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.805 14:59:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.805 14:59:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:20.805 14:59:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.805 14:59:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.805 14:59:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:20.805 14:59:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:20.805 14:59:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.805 14:59:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.064 14:59:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.064 14:59:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.064 14:59:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:21.064 14:59:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.064 14:59:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.064 14:59:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.064 14:59:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:21.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:18:21.064 00:18:21.064 --- 10.0.0.2 ping statistics --- 00:18:21.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.064 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:18:21.064 14:59:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:21.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:18:21.064 00:18:21.064 --- 10.0.0.1 ping statistics --- 00:18:21.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.064 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:18:21.064 14:59:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.064 14:59:06 -- nvmf/common.sh@411 -- # return 0 00:18:21.064 14:59:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:21.064 14:59:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.064 14:59:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:21.064 14:59:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:21.064 14:59:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.064 14:59:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:21.064 14:59:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:21.064 14:59:06 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:21.064 14:59:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:21.064 14:59:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:21.064 14:59:06 -- common/autotest_common.sh@10 -- # set +x 00:18:21.064 14:59:06 -- nvmf/common.sh@470 -- # nvmfpid=3786856 00:18:21.064 14:59:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:21.064 14:59:06 -- nvmf/common.sh@471 -- # waitforlisten 3786856 00:18:21.064 14:59:06 -- common/autotest_common.sh@817 -- # '[' -z 3786856 ']' 00:18:21.064 14:59:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.064 14:59:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:21.064 14:59:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.064 14:59:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:21.064 14:59:06 -- common/autotest_common.sh@10 -- # set +x 00:18:21.064 [2024-04-26 14:59:06.725288] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:18:21.064 [2024-04-26 14:59:06.725375] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.064 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.064 [2024-04-26 14:59:06.763522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:21.064 [2024-04-26 14:59:06.794166] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.322 [2024-04-26 14:59:06.888863] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.322 [2024-04-26 14:59:06.888932] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.322 [2024-04-26 14:59:06.888949] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.322 [2024-04-26 14:59:06.888963] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:21.322 [2024-04-26 14:59:06.888976] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.322 [2024-04-26 14:59:06.889072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:21.322 [2024-04-26 14:59:06.889127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:21.322 [2024-04-26 14:59:06.889181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:21.322 [2024-04-26 14:59:06.889184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.322 14:59:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.322 14:59:07 -- common/autotest_common.sh@850 -- # return 0 00:18:21.322 14:59:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:21.322 14:59:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:21.322 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:18:21.322 14:59:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.322 14:59:07 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:21.322 14:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.322 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:18:21.322 [2024-04-26 14:59:07.045771] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.323 14:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.323 14:59:07 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:21.323 14:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.323 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:18:21.581 Malloc0 00:18:21.581 14:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.581 14:59:07 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:21.581 14:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.581 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:18:21.581 14:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.581 14:59:07 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:21.581 14:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.582 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:18:21.582 14:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.582 14:59:07 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.582 14:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.582 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:18:21.582 [2024-04-26 14:59:07.099923] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.582 14:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.582 14:59:07 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:21.582 14:59:07 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:21.582 14:59:07 -- nvmf/common.sh@521 -- # config=() 00:18:21.582 14:59:07 -- nvmf/common.sh@521 -- # local subsystem config 00:18:21.582 14:59:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:21.582 14:59:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:21.582 { 00:18:21.582 "params": { 00:18:21.582 "name": "Nvme$subsystem", 00:18:21.582 
"trtype": "$TEST_TRANSPORT", 00:18:21.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:21.582 "adrfam": "ipv4", 00:18:21.582 "trsvcid": "$NVMF_PORT", 00:18:21.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:21.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:21.582 "hdgst": ${hdgst:-false}, 00:18:21.582 "ddgst": ${ddgst:-false} 00:18:21.582 }, 00:18:21.582 "method": "bdev_nvme_attach_controller" 00:18:21.582 } 00:18:21.582 EOF 00:18:21.582 )") 00:18:21.582 14:59:07 -- nvmf/common.sh@543 -- # cat 00:18:21.582 14:59:07 -- nvmf/common.sh@545 -- # jq . 00:18:21.582 14:59:07 -- nvmf/common.sh@546 -- # IFS=, 00:18:21.582 14:59:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:21.582 "params": { 00:18:21.582 "name": "Nvme1", 00:18:21.582 "trtype": "tcp", 00:18:21.582 "traddr": "10.0.0.2", 00:18:21.582 "adrfam": "ipv4", 00:18:21.582 "trsvcid": "4420", 00:18:21.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.582 "hdgst": false, 00:18:21.582 "ddgst": false 00:18:21.582 }, 00:18:21.582 "method": "bdev_nvme_attach_controller" 00:18:21.582 }' 00:18:21.582 [2024-04-26 14:59:07.145660] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:18:21.582 [2024-04-26 14:59:07.145729] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786995 ] 00:18:21.582 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.582 [2024-04-26 14:59:07.178114] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:21.582 [2024-04-26 14:59:07.207772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.582 [2024-04-26 14:59:07.296438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.582 [2024-04-26 14:59:07.296490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.582 [2024-04-26 14:59:07.296493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.840 I/O targets: 00:18:21.840 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:21.840 00:18:21.840 00:18:21.840 CUnit - A unit testing framework for C - Version 2.1-3 00:18:21.840 http://cunit.sourceforge.net/ 00:18:21.840 00:18:21.840 00:18:21.840 Suite: bdevio tests on: Nvme1n1 00:18:21.840 Test: blockdev write read block ...passed 00:18:21.840 Test: blockdev write zeroes read block ...passed 00:18:21.840 Test: blockdev write zeroes read no split ...passed 00:18:22.099 Test: blockdev write zeroes read split ...passed 00:18:22.099 Test: blockdev write zeroes read split partial ...passed 00:18:22.099 Test: blockdev reset ...[2024-04-26 14:59:07.688092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:22.099 [2024-04-26 14:59:07.688220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7bf0 (9): Bad file descriptor 00:18:22.099 [2024-04-26 14:59:07.701830] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:22.099 passed 00:18:22.099 Test: blockdev write read 8 blocks ...passed 00:18:22.099 Test: blockdev write read size > 128k ...passed 00:18:22.099 Test: blockdev write read invalid size ...passed 00:18:22.099 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:22.099 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:22.099 Test: blockdev write read max offset ...passed 00:18:22.357 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:22.357 Test: blockdev writev readv 8 blocks ...passed 00:18:22.357 Test: blockdev writev readv 30 x 1block ...passed 00:18:22.357 Test: blockdev writev readv block ...passed 00:18:22.357 Test: blockdev writev readv size > 128k ...passed 00:18:22.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:22.357 Test: blockdev comparev and writev ...[2024-04-26 14:59:07.958814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.357 [2024-04-26 14:59:07.958863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:07.958901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.357 [2024-04-26 14:59:07.958930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:07.959354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.357 [2024-04-26 14:59:07.959382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:07.959417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.357 [2024-04-26 14:59:07.959444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:07.959848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.357 [2024-04-26 14:59:07.959875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:07.959910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.357 [2024-04-26 14:59:07.959937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:07.960324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.357 [2024-04-26 14:59:07.960351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:07.960386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.357 [2024-04-26 14:59:07.960413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:22.357 passed 00:18:22.357 Test: blockdev nvme passthru rw ...passed 00:18:22.357 Test: blockdev nvme passthru vendor specific ...[2024-04-26 14:59:08.042325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.357 [2024-04-26 14:59:08.042355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:08.042540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.357 [2024-04-26 14:59:08.042566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:08.042750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.357 [2024-04-26 14:59:08.042775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:22.357 [2024-04-26 14:59:08.042957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.357 [2024-04-26 14:59:08.042990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:22.357 passed 00:18:22.357 Test: blockdev nvme admin passthru ...passed 00:18:22.616 Test: blockdev copy ...passed 00:18:22.616 00:18:22.616 Run Summary: Type Total Ran Passed Failed Inactive 00:18:22.616 suites 1 1 n/a 0 0 00:18:22.616 tests 23 23 23 0 0 00:18:22.616 asserts 152 152 152 0 n/a 00:18:22.616 00:18:22.616 Elapsed time = 1.236 seconds 00:18:22.616 14:59:08 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.616 14:59:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.616 14:59:08 -- common/autotest_common.sh@10 -- # set +x 00:18:22.616 14:59:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.616 14:59:08 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:22.616 14:59:08 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:22.616 14:59:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:22.616 14:59:08 -- nvmf/common.sh@117 -- # sync 00:18:22.616 14:59:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:22.616 14:59:08 -- nvmf/common.sh@120 -- # set +e 00:18:22.616 14:59:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:22.616 14:59:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:22.616 rmmod nvme_tcp 00:18:22.616 rmmod nvme_fabrics 00:18:22.616 rmmod nvme_keyring 00:18:22.616 14:59:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:22.616 14:59:08 -- nvmf/common.sh@124 -- # set -e 00:18:22.616 14:59:08 -- nvmf/common.sh@125 -- # return 0 00:18:22.616 14:59:08 -- nvmf/common.sh@478 -- # '[' -n 3786856 ']' 00:18:22.616 14:59:08 -- nvmf/common.sh@479 -- # killprocess 3786856 00:18:22.616 14:59:08 -- common/autotest_common.sh@936 -- # '[' -z 3786856 ']' 00:18:22.616 14:59:08 -- common/autotest_common.sh@940 -- # kill -0 3786856 00:18:22.616 14:59:08 -- common/autotest_common.sh@941 -- # uname 00:18:22.616 14:59:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.616 14:59:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3786856 00:18:22.616 14:59:08 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:18:22.616 14:59:08 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:18:22.616 14:59:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3786856' 00:18:22.616 killing process with pid 3786856 00:18:22.616 14:59:08 -- common/autotest_common.sh@955 -- # kill 3786856 00:18:22.616 14:59:08 -- common/autotest_common.sh@960 -- # wait 3786856 00:18:22.875 14:59:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:22.875 14:59:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:22.875 14:59:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:22.875 14:59:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.875 14:59:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:22.875 14:59:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.875 14:59:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.875 14:59:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.410 14:59:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:25.410 00:18:25.410 real 0m6.195s 00:18:25.410 user 0m9.592s 00:18:25.410 sys 0m2.068s 00:18:25.410 14:59:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:25.410 14:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:25.410 ************************************ 00:18:25.410 END TEST nvmf_bdevio 00:18:25.410 ************************************ 00:18:25.410 14:59:10 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:18:25.410 14:59:10 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:25.410 14:59:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:25.410 14:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:25.410 14:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:25.410 ************************************ 00:18:25.410 START TEST nvmf_bdevio_no_huge 00:18:25.410 ************************************ 00:18:25.410 14:59:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:25.410 * Looking for test storage... 
00:18:25.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.410 14:59:10 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.410 14:59:10 -- nvmf/common.sh@7 -- # uname -s 00:18:25.410 14:59:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.410 14:59:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.410 14:59:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.410 14:59:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.410 14:59:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.410 14:59:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.410 14:59:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.410 14:59:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.410 14:59:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.410 14:59:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.410 14:59:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:25.410 14:59:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:25.410 14:59:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.410 14:59:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.410 14:59:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.410 14:59:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.410 14:59:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.410 14:59:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.410 14:59:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.410 14:59:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.410 14:59:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.410 14:59:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.410 14:59:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.410 14:59:10 -- paths/export.sh@5 -- # export PATH 00:18:25.410 14:59:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.410 14:59:10 -- nvmf/common.sh@47 -- # : 0 00:18:25.410 14:59:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.410 14:59:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.410 14:59:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.410 14:59:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.410 14:59:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.410 14:59:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.410 14:59:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.410 14:59:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.410 14:59:10 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.410 14:59:10 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.410 14:59:10 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:25.410 14:59:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:25.410 14:59:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.410 14:59:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:25.410 14:59:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:25.410 14:59:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:25.410 14:59:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.410 14:59:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.410 14:59:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.410 14:59:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:25.410 14:59:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:25.410 14:59:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:25.410 14:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:27.311 14:59:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:27.311 14:59:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.311 14:59:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.311 14:59:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.311 14:59:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.311 14:59:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.311 14:59:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.311 14:59:12 -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.311 14:59:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.311 14:59:12 -- nvmf/common.sh@296 
-- # e810=() 00:18:27.311 14:59:12 -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.311 14:59:12 -- nvmf/common.sh@297 -- # x722=() 00:18:27.311 14:59:12 -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.311 14:59:12 -- nvmf/common.sh@298 -- # mlx=() 00:18:27.311 14:59:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.311 14:59:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.311 14:59:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.311 14:59:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:27.311 14:59:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.311 14:59:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.311 14:59:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:27.311 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:27.311 14:59:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.311 14:59:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:27.311 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:27.311 14:59:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.311 14:59:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.311 14:59:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.311 14:59:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:27.311 14:59:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.311 14:59:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:27.311 Found 
net devices under 0000:84:00.0: cvl_0_0 00:18:27.311 14:59:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.311 14:59:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.311 14:59:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.311 14:59:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:27.311 14:59:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.311 14:59:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:27.311 Found net devices under 0000:84:00.1: cvl_0_1 00:18:27.311 14:59:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.311 14:59:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:27.311 14:59:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:27.311 14:59:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:27.311 14:59:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:27.311 14:59:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.311 14:59:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.311 14:59:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.311 14:59:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:27.311 14:59:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.311 14:59:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.311 14:59:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:27.311 14:59:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.311 14:59:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.311 14:59:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:27.311 14:59:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:27.311 14:59:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.311 14:59:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.311 14:59:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.311 14:59:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.311 14:59:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:27.311 14:59:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.311 14:59:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.311 14:59:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.311 14:59:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:27.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:18:27.311 00:18:27.311 --- 10.0.0.2 ping statistics --- 00:18:27.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.311 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:18:27.311 14:59:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:27.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:18:27.311 00:18:27.311 --- 10.0.0.1 ping statistics --- 00:18:27.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.311 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:27.311 14:59:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.311 14:59:12 -- nvmf/common.sh@411 -- # return 0 00:18:27.311 14:59:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:27.311 14:59:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.311 14:59:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:27.312 14:59:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:27.312 14:59:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.312 14:59:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:27.312 14:59:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:27.312 14:59:12 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:27.312 14:59:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:27.312 14:59:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:27.312 14:59:12 -- common/autotest_common.sh@10 -- # set +x 00:18:27.312 14:59:12 -- nvmf/common.sh@470 -- # nvmfpid=3789088 00:18:27.312 14:59:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:27.312 14:59:12 -- nvmf/common.sh@471 -- # waitforlisten 3789088 00:18:27.312 14:59:12 -- common/autotest_common.sh@817 -- # '[' -z 3789088 ']' 00:18:27.312 14:59:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.312 14:59:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:27.312 14:59:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.312 14:59:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:27.312 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:27.312 [2024-04-26 14:59:13.044497] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:18:27.312 [2024-04-26 14:59:13.044604] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:27.570 [2024-04-26 14:59:13.094304] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:27.570 [2024-04-26 14:59:13.114047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.570 [2024-04-26 14:59:13.200382] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.570 [2024-04-26 14:59:13.200453] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.570 [2024-04-26 14:59:13.200467] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.570 [2024-04-26 14:59:13.200479] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.570 [2024-04-26 14:59:13.200489] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
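The nvmf_tcp_init sequence traced above turns the two physical E810 ports into a self-contained NVMe/TCP testbed: cvl_0_0 is moved into a private network namespace and becomes the target-side interface (10.0.0.2), while its peer cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits NVMe/TCP traffic on port 4420. A condensed sketch of the same plumbing, with interface names and addresses taken directly from the trace:

    ip netns add cvl_0_0_ns_spdk                                   # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                             # initiator -> target

Both ping directions complete above, which is why nvmf_tgt is then launched under "ip netns exec cvl_0_0_ns_spdk" and listens on 10.0.0.2:4420.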
00:18:27.570 [2024-04-26 14:59:13.200588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:27.570 [2024-04-26 14:59:13.200635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:27.570 [2024-04-26 14:59:13.200667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:27.570 [2024-04-26 14:59:13.200669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.570 14:59:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:27.570 14:59:13 -- common/autotest_common.sh@850 -- # return 0 00:18:27.570 14:59:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:27.570 14:59:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:27.570 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:27.826 14:59:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.826 14:59:13 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:27.826 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.826 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:27.826 [2024-04-26 14:59:13.320487] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.826 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.826 14:59:13 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:27.826 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.826 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:27.826 Malloc0 00:18:27.826 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.826 14:59:13 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.826 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.826 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:27.826 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.826 14:59:13 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.826 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.826 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:27.826 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.826 14:59:13 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.826 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.826 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:27.826 [2024-04-26 14:59:13.358257] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.826 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.826 14:59:13 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:27.826 14:59:13 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:27.826 14:59:13 -- nvmf/common.sh@521 -- # config=() 00:18:27.826 14:59:13 -- nvmf/common.sh@521 -- # local subsystem config 00:18:27.826 14:59:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:27.826 14:59:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:27.826 { 00:18:27.826 "params": { 00:18:27.826 "name": "Nvme$subsystem", 00:18:27.826 "trtype": "$TEST_TRANSPORT", 00:18:27.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:27.826 "adrfam": "ipv4", 00:18:27.826 
"trsvcid": "$NVMF_PORT", 00:18:27.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:27.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:27.826 "hdgst": ${hdgst:-false}, 00:18:27.826 "ddgst": ${ddgst:-false} 00:18:27.826 }, 00:18:27.826 "method": "bdev_nvme_attach_controller" 00:18:27.826 } 00:18:27.826 EOF 00:18:27.826 )") 00:18:27.826 14:59:13 -- nvmf/common.sh@543 -- # cat 00:18:27.826 14:59:13 -- nvmf/common.sh@545 -- # jq . 00:18:27.826 14:59:13 -- nvmf/common.sh@546 -- # IFS=, 00:18:27.826 14:59:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:27.826 "params": { 00:18:27.826 "name": "Nvme1", 00:18:27.826 "trtype": "tcp", 00:18:27.826 "traddr": "10.0.0.2", 00:18:27.826 "adrfam": "ipv4", 00:18:27.826 "trsvcid": "4420", 00:18:27.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.826 "hdgst": false, 00:18:27.826 "ddgst": false 00:18:27.826 }, 00:18:27.826 "method": "bdev_nvme_attach_controller" 00:18:27.826 }' 00:18:27.826 [2024-04-26 14:59:13.399809] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:18:27.826 [2024-04-26 14:59:13.399885] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3789116 ] 00:18:27.826 [2024-04-26 14:59:13.440860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:27.826 [2024-04-26 14:59:13.460651] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:27.826 [2024-04-26 14:59:13.540928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.826 [2024-04-26 14:59:13.540982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.826 [2024-04-26 14:59:13.540985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.390 I/O targets: 00:18:28.390 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:28.390 00:18:28.390 00:18:28.390 CUnit - A unit testing framework for C - Version 2.1-3 00:18:28.390 http://cunit.sourceforge.net/ 00:18:28.390 00:18:28.390 00:18:28.390 Suite: bdevio tests on: Nvme1n1 00:18:28.390 Test: blockdev write read block ...passed 00:18:28.390 Test: blockdev write zeroes read block ...passed 00:18:28.390 Test: blockdev write zeroes read no split ...passed 00:18:28.390 Test: blockdev write zeroes read split ...passed 00:18:28.390 Test: blockdev write zeroes read split partial ...passed 00:18:28.390 Test: blockdev reset ...[2024-04-26 14:59:14.072464] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:28.390 [2024-04-26 14:59:14.072578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165b470 (9): Bad file descriptor 00:18:28.647 [2024-04-26 14:59:14.133372] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:28.647 passed 00:18:28.647 Test: blockdev write read 8 blocks ...passed 00:18:28.647 Test: blockdev write read size > 128k ...passed 00:18:28.647 Test: blockdev write read invalid size ...passed 00:18:28.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:28.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:28.647 Test: blockdev write read max offset ...passed 00:18:28.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:28.647 Test: blockdev writev readv 8 blocks ...passed 00:18:28.647 Test: blockdev writev readv 30 x 1block ...passed 00:18:28.647 Test: blockdev writev readv block ...passed 00:18:28.647 Test: blockdev writev readv size > 128k ...passed 00:18:28.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:28.647 Test: blockdev comparev and writev ...[2024-04-26 14:59:14.308291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.647 [2024-04-26 14:59:14.308328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.647 [2024-04-26 14:59:14.308366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.647 [2024-04-26 14:59:14.308395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:28.647 [2024-04-26 14:59:14.308906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.647 [2024-04-26 14:59:14.308934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:28.647 [2024-04-26 14:59:14.308970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.647 [2024-04-26 14:59:14.308997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:28.647 [2024-04-26 14:59:14.309502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.647 [2024-04-26 14:59:14.309530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:28.647 [2024-04-26 14:59:14.309566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.647 [2024-04-26 14:59:14.309594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:28.647 [2024-04-26 14:59:14.310097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.647 [2024-04-26 14:59:14.310124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:28.647 [2024-04-26 14:59:14.310160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.647 [2024-04-26 14:59:14.310187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:28.647 passed 00:18:28.905 Test: blockdev nvme passthru rw ...passed 00:18:28.905 Test: blockdev nvme passthru vendor specific ...[2024-04-26 14:59:14.394530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.905 [2024-04-26 14:59:14.394559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:28.905 [2024-04-26 14:59:14.394889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.905 [2024-04-26 14:59:14.394916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:28.905 [2024-04-26 14:59:14.395169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.905 [2024-04-26 14:59:14.395195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:28.905 [2024-04-26 14:59:14.395410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.905 [2024-04-26 14:59:14.395435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:28.905 passed 00:18:28.905 Test: blockdev nvme admin passthru ...passed 00:18:28.905 Test: blockdev copy ...passed 00:18:28.905 00:18:28.905 Run Summary: Type Total Ran Passed Failed Inactive 00:18:28.905 suites 1 1 n/a 0 0 00:18:28.905 tests 23 23 23 0 0 00:18:28.905 asserts 152 152 152 0 n/a 00:18:28.905 00:18:28.905 Elapsed time = 1.201 seconds 00:18:29.162 14:59:14 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.162 14:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.162 14:59:14 -- common/autotest_common.sh@10 -- # set +x 00:18:29.162 14:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.162 14:59:14 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:29.162 14:59:14 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:29.162 14:59:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:29.162 14:59:14 -- nvmf/common.sh@117 -- # sync 00:18:29.162 14:59:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.162 14:59:14 -- nvmf/common.sh@120 -- # set +e 00:18:29.162 14:59:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.162 14:59:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.162 rmmod nvme_tcp 00:18:29.162 rmmod nvme_fabrics 00:18:29.162 rmmod nvme_keyring 00:18:29.162 14:59:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.162 14:59:14 -- nvmf/common.sh@124 -- # set -e 00:18:29.163 14:59:14 -- nvmf/common.sh@125 -- # return 0 00:18:29.163 14:59:14 -- nvmf/common.sh@478 -- # '[' -n 3789088 ']' 00:18:29.163 14:59:14 -- nvmf/common.sh@479 -- # killprocess 3789088 00:18:29.163 14:59:14 -- common/autotest_common.sh@936 -- # '[' -z 3789088 ']' 00:18:29.163 14:59:14 -- common/autotest_common.sh@940 -- # kill -0 3789088 00:18:29.163 14:59:14 -- common/autotest_common.sh@941 -- # uname 00:18:29.163 14:59:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.163 14:59:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3789088 00:18:29.163 14:59:14 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:18:29.163 14:59:14 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:18:29.163 14:59:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3789088' 00:18:29.163 killing process with pid 3789088 00:18:29.163 14:59:14 -- common/autotest_common.sh@955 -- # kill 3789088 00:18:29.163 14:59:14 -- common/autotest_common.sh@960 -- # wait 3789088 00:18:29.789 14:59:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:29.789 14:59:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:29.789 14:59:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:29.789 14:59:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.789 14:59:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.789 14:59:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.789 14:59:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.789 14:59:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.690 14:59:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:31.690 00:18:31.690 real 0m6.516s 00:18:31.690 user 0m11.020s 00:18:31.690 sys 0m2.512s 00:18:31.690 14:59:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:31.690 14:59:17 -- common/autotest_common.sh@10 -- # set +x 00:18:31.690 ************************************ 00:18:31.690 END TEST nvmf_bdevio_no_huge 00:18:31.690 ************************************ 00:18:31.690 14:59:17 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:31.690 14:59:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:31.690 14:59:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.690 14:59:17 -- common/autotest_common.sh@10 -- # set +x 00:18:31.690 ************************************ 00:18:31.690 START TEST nvmf_tls 00:18:31.690 ************************************ 00:18:31.690 14:59:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:31.949 * Looking for test storage... 
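Before the TLS suite starts, nvmftestfini has already unwound the bdevio fixture in roughly the reverse order it was built: the target process is killed and reaped, the kernel initiator modules are unloaded, the namespace is removed, and the stale address is flushed. Condensed from the trace just above (the body of _remove_spdk_ns is not shown in this log, so the netns deletion line is an assumption about what it does):

    kill 3789088 && wait 3789088       # stop nvmf_tgt
    modprobe -v -r nvme-tcp            # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk    # assumed: what _remove_spdk_ns performs
    ip -4 addr flush cvl_0_1           # drop 10.0.0.1/24 from the initiator port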
00:18:31.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.949 14:59:17 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.949 14:59:17 -- nvmf/common.sh@7 -- # uname -s 00:18:31.949 14:59:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.949 14:59:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.949 14:59:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.949 14:59:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.949 14:59:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.949 14:59:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.949 14:59:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.949 14:59:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.949 14:59:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.949 14:59:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.949 14:59:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:31.949 14:59:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:31.949 14:59:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.949 14:59:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.949 14:59:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.949 14:59:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.949 14:59:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.949 14:59:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.949 14:59:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.949 14:59:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.949 14:59:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.949 14:59:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.949 14:59:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.949 14:59:17 -- paths/export.sh@5 -- # export PATH 00:18:31.949 14:59:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.949 14:59:17 -- nvmf/common.sh@47 -- # : 0 00:18:31.949 14:59:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.949 14:59:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.949 14:59:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.949 14:59:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.949 14:59:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.949 14:59:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.949 14:59:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.949 14:59:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.949 14:59:17 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.949 14:59:17 -- target/tls.sh@62 -- # nvmftestinit 00:18:31.949 14:59:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:31.949 14:59:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.949 14:59:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:31.949 14:59:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:31.949 14:59:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:31.949 14:59:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.949 14:59:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.949 14:59:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.949 14:59:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:31.949 14:59:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:31.949 14:59:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.949 14:59:17 -- common/autotest_common.sh@10 -- # set +x 00:18:33.852 14:59:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:33.852 14:59:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:33.852 14:59:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:33.852 14:59:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:33.852 14:59:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:33.852 14:59:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:33.852 14:59:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:33.852 14:59:19 -- nvmf/common.sh@295 -- # net_devs=() 00:18:33.852 14:59:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:33.852 14:59:19 -- nvmf/common.sh@296 -- # e810=() 00:18:33.852 
14:59:19 -- nvmf/common.sh@296 -- # local -ga e810 00:18:33.852 14:59:19 -- nvmf/common.sh@297 -- # x722=() 00:18:33.852 14:59:19 -- nvmf/common.sh@297 -- # local -ga x722 00:18:33.852 14:59:19 -- nvmf/common.sh@298 -- # mlx=() 00:18:33.852 14:59:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:33.852 14:59:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.852 14:59:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:33.852 14:59:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:33.852 14:59:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:33.852 14:59:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.852 14:59:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:33.852 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:33.852 14:59:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.852 14:59:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:33.852 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:33.852 14:59:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:33.852 14:59:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:33.852 14:59:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.852 14:59:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.852 14:59:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:33.852 14:59:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.852 14:59:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:33.852 Found net devices under 
0000:84:00.0: cvl_0_0 00:18:33.853 14:59:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.853 14:59:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.853 14:59:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.853 14:59:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:33.853 14:59:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.853 14:59:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:33.853 Found net devices under 0000:84:00.1: cvl_0_1 00:18:33.853 14:59:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.853 14:59:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:33.853 14:59:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:33.853 14:59:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:33.853 14:59:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:33.853 14:59:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:33.853 14:59:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.853 14:59:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.853 14:59:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.853 14:59:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:33.853 14:59:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.853 14:59:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.853 14:59:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:33.853 14:59:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.853 14:59:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.853 14:59:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:33.853 14:59:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:33.853 14:59:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.853 14:59:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.853 14:59:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.853 14:59:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.853 14:59:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:33.853 14:59:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.853 14:59:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.853 14:59:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.853 14:59:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:33.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:18:33.853 00:18:33.853 --- 10.0.0.2 ping statistics --- 00:18:33.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.853 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:18:33.853 14:59:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:33.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:33.853 00:18:33.853 --- 10.0.0.1 ping statistics --- 00:18:33.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.853 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:33.853 14:59:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.853 14:59:19 -- nvmf/common.sh@411 -- # return 0 00:18:33.853 14:59:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:33.853 14:59:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.853 14:59:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:33.853 14:59:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:33.853 14:59:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.853 14:59:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:33.853 14:59:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:33.853 14:59:19 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:33.853 14:59:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:33.853 14:59:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:33.853 14:59:19 -- common/autotest_common.sh@10 -- # set +x 00:18:33.853 14:59:19 -- nvmf/common.sh@470 -- # nvmfpid=3791320 00:18:33.853 14:59:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:33.853 14:59:19 -- nvmf/common.sh@471 -- # waitforlisten 3791320 00:18:33.853 14:59:19 -- common/autotest_common.sh@817 -- # '[' -z 3791320 ']' 00:18:33.853 14:59:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.853 14:59:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:33.853 14:59:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.853 14:59:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:33.853 14:59:19 -- common/autotest_common.sh@10 -- # set +x 00:18:33.853 [2024-04-26 14:59:19.525630] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:18:33.853 [2024-04-26 14:59:19.525713] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.853 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.853 [2024-04-26 14:59:19.566325] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:34.111 [2024-04-26 14:59:19.593472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.111 [2024-04-26 14:59:19.680475] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.111 [2024-04-26 14:59:19.680545] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.111 [2024-04-26 14:59:19.680573] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.111 [2024-04-26 14:59:19.680585] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:34.111 [2024-04-26 14:59:19.680595] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.111 [2024-04-26 14:59:19.680625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.111 14:59:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.111 14:59:19 -- common/autotest_common.sh@850 -- # return 0 00:18:34.111 14:59:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:34.111 14:59:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:34.111 14:59:19 -- common/autotest_common.sh@10 -- # set +x 00:18:34.111 14:59:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.111 14:59:19 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:34.111 14:59:19 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:34.369 true 00:18:34.369 14:59:20 -- target/tls.sh@73 -- # jq -r .tls_version 00:18:34.369 14:59:20 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:34.627 14:59:20 -- target/tls.sh@73 -- # version=0 00:18:34.627 14:59:20 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:34.627 14:59:20 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:34.885 14:59:20 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:34.885 14:59:20 -- target/tls.sh@81 -- # jq -r .tls_version 00:18:35.180 14:59:20 -- target/tls.sh@81 -- # version=13 00:18:35.180 14:59:20 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:35.180 14:59:20 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:35.439 14:59:20 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:35.439 14:59:20 -- target/tls.sh@89 -- # jq -r .tls_version 00:18:35.696 14:59:21 -- target/tls.sh@89 -- # version=7 00:18:35.696 14:59:21 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:35.696 14:59:21 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:35.696 14:59:21 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:35.954 14:59:21 -- target/tls.sh@96 -- # ktls=false 00:18:35.954 14:59:21 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:35.954 14:59:21 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:36.211 14:59:21 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:36.211 14:59:21 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:36.468 14:59:21 -- target/tls.sh@104 -- # ktls=true 00:18:36.468 14:59:21 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:36.468 14:59:21 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:36.726 14:59:22 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:36.726 14:59:22 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:36.984 14:59:22 -- target/tls.sh@112 -- # ktls=false 00:18:36.984 14:59:22 -- target/tls.sh@113 
-- # [[ false != \f\a\l\s\e ]] 00:18:36.984 14:59:22 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:36.984 14:59:22 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:36.984 14:59:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:36.984 14:59:22 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:36.984 14:59:22 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:18:36.984 14:59:22 -- nvmf/common.sh@693 -- # digest=1 00:18:36.984 14:59:22 -- nvmf/common.sh@694 -- # python - 00:18:36.984 14:59:22 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:36.984 14:59:22 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:36.984 14:59:22 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:36.984 14:59:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:36.984 14:59:22 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:36.984 14:59:22 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:18:36.984 14:59:22 -- nvmf/common.sh@693 -- # digest=1 00:18:36.984 14:59:22 -- nvmf/common.sh@694 -- # python - 00:18:36.984 14:59:22 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:36.984 14:59:22 -- target/tls.sh@121 -- # mktemp 00:18:36.984 14:59:22 -- target/tls.sh@121 -- # key_path=/tmp/tmp.FT9gS7BfMH 00:18:36.984 14:59:22 -- target/tls.sh@122 -- # mktemp 00:18:36.984 14:59:22 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.tcl6peEyfL 00:18:36.984 14:59:22 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:36.984 14:59:22 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:36.984 14:59:22 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.FT9gS7BfMH 00:18:36.984 14:59:22 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tcl6peEyfL 00:18:36.984 14:59:22 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:37.242 14:59:22 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:37.500 14:59:23 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.FT9gS7BfMH 00:18:37.500 14:59:23 -- target/tls.sh@49 -- # local key=/tmp/tmp.FT9gS7BfMH 00:18:37.500 14:59:23 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:37.757 [2024-04-26 14:59:23.397742] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.757 14:59:23 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:38.014 14:59:23 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:38.272 [2024-04-26 14:59:23.919171] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.272 [2024-04-26 14:59:23.919454] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.272 14:59:23 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:18:38.531 malloc0 00:18:38.531 14:59:24 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:38.789 14:59:24 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FT9gS7BfMH 00:18:39.047 [2024-04-26 14:59:24.657995] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:39.048 14:59:24 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.FT9gS7BfMH 00:18:39.048 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.243 Initializing NVMe Controllers 00:18:51.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:51.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:51.243 Initialization complete. Launching workers. 00:18:51.243 ======================================================== 00:18:51.243 Latency(us) 00:18:51.243 Device Information : IOPS MiB/s Average min max 00:18:51.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7317.39 28.58 8749.21 1320.73 10012.75 00:18:51.243 ======================================================== 00:18:51.243 Total : 7317.39 28.58 8749.21 1320.73 10012.75 00:18:51.243 00:18:51.243 14:59:34 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FT9gS7BfMH 00:18:51.243 14:59:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:51.243 14:59:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:51.243 14:59:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:51.243 14:59:34 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FT9gS7BfMH' 00:18:51.243 14:59:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.243 14:59:34 -- target/tls.sh@28 -- # bdevperf_pid=3793096 00:18:51.243 14:59:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.243 14:59:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.243 14:59:34 -- target/tls.sh@31 -- # waitforlisten 3793096 /var/tmp/bdevperf.sock 00:18:51.243 14:59:34 -- common/autotest_common.sh@817 -- # '[' -z 3793096 ']' 00:18:51.243 14:59:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.243 14:59:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:51.243 14:59:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.243 14:59:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:51.243 14:59:34 -- common/autotest_common.sh@10 -- # set +x 00:18:51.243 [2024-04-26 14:59:34.823377] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
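Both ends of the TLS association are keyed with the same interchange string. The target learned it when the host was authorized (nvmf_subsystem_add_host with --psk, after -k on the listener enabled TLS), and each initiator-side tool hands it back: spdk_nvme_perf through --psk-path above, bdevperf through the --psk argument of bdev_nvme_attach_controller below. In outline, using only the RPCs visible in this trace:

    # target side, inside the namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FT9gS7BfMH
    # initiator side, against the bdevperf RPC socket
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.FT9gS7BfMH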
00:18:51.243 [2024-04-26 14:59:34.823461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793096 ] 00:18:51.243 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.243 [2024-04-26 14:59:34.855684] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:51.243 [2024-04-26 14:59:34.884321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.243 [2024-04-26 14:59:34.972688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.243 14:59:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.243 14:59:35 -- common/autotest_common.sh@850 -- # return 0 00:18:51.243 14:59:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FT9gS7BfMH 00:18:51.243 [2024-04-26 14:59:35.343836] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.243 [2024-04-26 14:59:35.343956] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:51.243 TLSTESTn1 00:18:51.243 14:59:35 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:51.243 Running I/O for 10 seconds... 00:19:01.214 00:19:01.214 Latency(us) 00:19:01.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.214 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:01.214 Verification LBA range: start 0x0 length 0x2000 00:19:01.214 TLSTESTn1 : 10.02 3497.19 13.66 0.00 0.00 36536.08 6043.88 53593.88 00:19:01.214 =================================================================================================================== 00:19:01.214 Total : 3497.19 13.66 0.00 0.00 36536.08 6043.88 53593.88 00:19:01.214 0 00:19:01.214 14:59:45 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:01.214 14:59:45 -- target/tls.sh@45 -- # killprocess 3793096 00:19:01.214 14:59:45 -- common/autotest_common.sh@936 -- # '[' -z 3793096 ']' 00:19:01.214 14:59:45 -- common/autotest_common.sh@940 -- # kill -0 3793096 00:19:01.214 14:59:45 -- common/autotest_common.sh@941 -- # uname 00:19:01.214 14:59:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.214 14:59:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3793096 00:19:01.214 14:59:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:01.214 14:59:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:01.214 14:59:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3793096' 00:19:01.214 killing process with pid 3793096 00:19:01.214 14:59:45 -- common/autotest_common.sh@955 -- # kill 3793096 00:19:01.214 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.214 00:19:01.214 Latency(us) 00:19:01.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.214 
=================================================================================================================== 00:19:01.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.214 [2024-04-26 14:59:45.613918] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:01.214 14:59:45 -- common/autotest_common.sh@960 -- # wait 3793096 00:19:01.214 14:59:45 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tcl6peEyfL 00:19:01.214 14:59:45 -- common/autotest_common.sh@638 -- # local es=0 00:19:01.214 14:59:45 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tcl6peEyfL 00:19:01.214 14:59:45 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:01.214 14:59:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:01.214 14:59:45 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:01.214 14:59:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:01.214 14:59:45 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tcl6peEyfL 00:19:01.214 14:59:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.214 14:59:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:01.214 14:59:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:01.214 14:59:45 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tcl6peEyfL' 00:19:01.214 14:59:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.214 14:59:45 -- target/tls.sh@28 -- # bdevperf_pid=3794413 00:19:01.214 14:59:45 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.214 14:59:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.214 14:59:45 -- target/tls.sh@31 -- # waitforlisten 3794413 /var/tmp/bdevperf.sock 00:19:01.214 14:59:45 -- common/autotest_common.sh@817 -- # '[' -z 3794413 ']' 00:19:01.214 14:59:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.214 14:59:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:01.214 14:59:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.214 14:59:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:01.214 14:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:01.214 [2024-04-26 14:59:45.888775] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:01.214 [2024-04-26 14:59:45.888861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794413 ] 00:19:01.214 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.214 [2024-04-26 14:59:45.920613] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
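The failure being provoked here is a deliberate key mismatch: the target only knows the first key (/tmp/tmp.FT9gS7BfMH), while this bdevperf instance offers the second (/tmp/tmp.tcl6peEyfL), so the TLS handshake cannot complete, as the errors that follow show. Both strings were minted earlier by format_interchange_psk, which wraps the raw hex key in the NVMe TLS PSK interchange format ("NVMeTLSkey-1:01:<base64>:"). A rough re-derivation of the first key, assuming the base64 payload is the ASCII key bytes followed by a little-endian CRC32; the byte order and the meaning of the "01" field are assumptions, not confirmed by this log:

    # hypothetical sketch; if the layout assumption holds, this should print
    # the NVMeTLSkey-1:01:MDAx... string generated above
    key=00112233445566778899aabbccddeeff
    python3 -c "import base64,struct,zlib; k=b'$key'; c=struct.pack('<I', zlib.crc32(k) & 0xffffffff); print('NVMeTLSkey-1:01:' + base64.b64encode(k + c).decode() + ':')"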
00:19:01.214 [2024-04-26 14:59:45.948564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.214 [2024-04-26 14:59:46.031536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.214 14:59:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:01.214 14:59:46 -- common/autotest_common.sh@850 -- # return 0 00:19:01.214 14:59:46 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tcl6peEyfL 00:19:01.214 [2024-04-26 14:59:46.389688] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.214 [2024-04-26 14:59:46.389815] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:01.214 [2024-04-26 14:59:46.397642] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:01.214 [2024-04-26 14:59:46.398630] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bb00 (107): Transport endpoint is not connected 00:19:01.214 [2024-04-26 14:59:46.399621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bb00 (9): Bad file descriptor 00:19:01.214 [2024-04-26 14:59:46.400620] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:01.214 [2024-04-26 14:59:46.400640] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:01.214 [2024-04-26 14:59:46.400652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
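The error cascade above is test 146 behaving as intended: the initiator offered /tmp/tmp.tcl6peEyfL, a key the target never provisioned, so the TLS handshake dies and the controller ends up in a failed state. For the attach to succeed, both sides must reference the same key material; a minimal sketch, with the long Jenkins paths shortened to rpc.py and a hypothetical key path /tmp/psk.txt:

  # target side: permit host1 on cnode1 with a specific PSK file (0600 perms)
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/psk.txt
  # initiator side: attach with the same key file
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/psk.txt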
00:19:01.214 request: 00:19:01.214 { 00:19:01.214 "name": "TLSTEST", 00:19:01.214 "trtype": "tcp", 00:19:01.214 "traddr": "10.0.0.2", 00:19:01.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.214 "adrfam": "ipv4", 00:19:01.214 "trsvcid": "4420", 00:19:01.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.214 "psk": "/tmp/tmp.tcl6peEyfL", 00:19:01.214 "method": "bdev_nvme_attach_controller", 00:19:01.214 "req_id": 1 00:19:01.214 } 00:19:01.214 Got JSON-RPC error response 00:19:01.214 response: 00:19:01.214 { 00:19:01.214 "code": -32602, 00:19:01.214 "message": "Invalid parameters" 00:19:01.214 } 00:19:01.214 14:59:46 -- target/tls.sh@36 -- # killprocess 3794413 00:19:01.214 14:59:46 -- common/autotest_common.sh@936 -- # '[' -z 3794413 ']' 00:19:01.214 14:59:46 -- common/autotest_common.sh@940 -- # kill -0 3794413 00:19:01.214 14:59:46 -- common/autotest_common.sh@941 -- # uname 00:19:01.214 14:59:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.214 14:59:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3794413 00:19:01.214 14:59:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:01.214 14:59:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:01.214 14:59:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3794413' 00:19:01.214 killing process with pid 3794413 00:19:01.214 14:59:46 -- common/autotest_common.sh@955 -- # kill 3794413 00:19:01.214 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.214 00:19:01.214 Latency(us) 00:19:01.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.214 =================================================================================================================== 00:19:01.214 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.214 [2024-04-26 14:59:46.448321] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:01.214 14:59:46 -- common/autotest_common.sh@960 -- # wait 3794413 00:19:01.214 14:59:46 -- target/tls.sh@37 -- # return 1 00:19:01.214 14:59:46 -- common/autotest_common.sh@641 -- # es=1 00:19:01.214 14:59:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:01.214 14:59:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:01.214 14:59:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:01.214 14:59:46 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FT9gS7BfMH 00:19:01.214 14:59:46 -- common/autotest_common.sh@638 -- # local es=0 00:19:01.214 14:59:46 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FT9gS7BfMH 00:19:01.214 14:59:46 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:01.214 14:59:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:01.214 14:59:46 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:01.214 14:59:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:01.214 14:59:46 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FT9gS7BfMH 00:19:01.214 14:59:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.214 14:59:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:01.214 14:59:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
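The `return 1` and `es` arithmetic closing the test is the harness asserting the failure: the NOT wrapper from autotest_common.sh runs its argument list and passes only when the wrapped command exits nonzero. A simplified sketch of that idiom (the real helper also validates the first argument with `type -t`, as the xtrace above shows):

  NOT() {
      local es=0
      "$@" || es=$?    # run the wrapped command, remember its exit status
      (( es != 0 ))    # invert: succeed only if the command failed
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      /tmp/tmp.tcl6peEyfL    # passes, because this attach is expected to fail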
00:19:01.214 14:59:46 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FT9gS7BfMH' 00:19:01.215 14:59:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.215 14:59:46 -- target/tls.sh@28 -- # bdevperf_pid=3794518 00:19:01.215 14:59:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.215 14:59:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.215 14:59:46 -- target/tls.sh@31 -- # waitforlisten 3794518 /var/tmp/bdevperf.sock 00:19:01.215 14:59:46 -- common/autotest_common.sh@817 -- # '[' -z 3794518 ']' 00:19:01.215 14:59:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.215 14:59:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:01.215 14:59:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.215 14:59:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:01.215 14:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:01.215 [2024-04-26 14:59:46.704364] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:01.215 [2024-04-26 14:59:46.704454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794518 ] 00:19:01.215 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.215 [2024-04-26 14:59:46.735354] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:01.215 [2024-04-26 14:59:46.762381] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.215 [2024-04-26 14:59:46.843229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.215 14:59:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:01.215 14:59:46 -- common/autotest_common.sh@850 -- # return 0 00:19:01.215 14:59:46 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FT9gS7BfMH 00:19:01.473 [2024-04-26 14:59:47.208970] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.473 [2024-04-26 14:59:47.209147] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:01.731 [2024-04-26 14:59:47.219704] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:01.731 [2024-04-26 14:59:47.219733] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:01.731 [2024-04-26 14:59:47.219771] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:01.731 [2024-04-26 14:59:47.220434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf3b00 (107): Transport endpoint is not connected 00:19:01.731 [2024-04-26 14:59:47.221426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf3b00 (9): Bad file descriptor 00:19:01.731 [2024-04-26 14:59:47.222425] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:01.731 [2024-04-26 14:59:47.222444] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:01.731 [2024-04-26 14:59:47.222456] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
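Test 149 fails one layer later than test 146: the key file is one the target knows, but the handshake presents the identity for host2, and only host1 was provisioned, hence `Could not find PSK for identity` on the target before the socket drops. As the error lines show, the lookup identity is composed from a fixed prefix plus both NQNs:

  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  identity="NVMe0R01 ${hostnqn} ${subnqn}"  # must match a provisioned (host, subsystem) pair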
00:19:01.731 request: 00:19:01.731 { 00:19:01.731 "name": "TLSTEST", 00:19:01.731 "trtype": "tcp", 00:19:01.731 "traddr": "10.0.0.2", 00:19:01.731 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:01.731 "adrfam": "ipv4", 00:19:01.731 "trsvcid": "4420", 00:19:01.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.731 "psk": "/tmp/tmp.FT9gS7BfMH", 00:19:01.731 "method": "bdev_nvme_attach_controller", 00:19:01.731 "req_id": 1 00:19:01.731 } 00:19:01.731 Got JSON-RPC error response 00:19:01.731 response: 00:19:01.731 { 00:19:01.731 "code": -32602, 00:19:01.731 "message": "Invalid parameters" 00:19:01.731 } 00:19:01.731 14:59:47 -- target/tls.sh@36 -- # killprocess 3794518 00:19:01.731 14:59:47 -- common/autotest_common.sh@936 -- # '[' -z 3794518 ']' 00:19:01.731 14:59:47 -- common/autotest_common.sh@940 -- # kill -0 3794518 00:19:01.731 14:59:47 -- common/autotest_common.sh@941 -- # uname 00:19:01.731 14:59:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.731 14:59:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3794518 00:19:01.731 14:59:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:01.731 14:59:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:01.731 14:59:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3794518' 00:19:01.731 killing process with pid 3794518 00:19:01.731 14:59:47 -- common/autotest_common.sh@955 -- # kill 3794518 00:19:01.731 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.731 00:19:01.731 Latency(us) 00:19:01.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.731 =================================================================================================================== 00:19:01.731 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.731 [2024-04-26 14:59:47.264011] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:01.731 14:59:47 -- common/autotest_common.sh@960 -- # wait 3794518 00:19:01.731 14:59:47 -- target/tls.sh@37 -- # return 1 00:19:01.731 14:59:47 -- common/autotest_common.sh@641 -- # es=1 00:19:01.731 14:59:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:01.731 14:59:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:01.731 14:59:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:01.731 14:59:47 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FT9gS7BfMH 00:19:01.731 14:59:47 -- common/autotest_common.sh@638 -- # local es=0 00:19:01.731 14:59:47 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FT9gS7BfMH 00:19:01.731 14:59:47 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:01.731 14:59:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:01.731 14:59:47 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:01.731 14:59:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:01.731 14:59:47 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FT9gS7BfMH 00:19:01.731 14:59:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.731 14:59:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:01.731 14:59:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:01.731 14:59:47 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FT9gS7BfMH' 00:19:01.731 14:59:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.731 14:59:47 -- target/tls.sh@28 -- # bdevperf_pid=3794571 00:19:01.731 14:59:47 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.731 14:59:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.731 14:59:47 -- target/tls.sh@31 -- # waitforlisten 3794571 /var/tmp/bdevperf.sock 00:19:01.731 14:59:47 -- common/autotest_common.sh@817 -- # '[' -z 3794571 ']' 00:19:01.731 14:59:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.731 14:59:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:01.731 14:59:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.731 14:59:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:01.731 14:59:47 -- common/autotest_common.sh@10 -- # set +x 00:19:01.990 [2024-04-26 14:59:47.501035] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:01.990 [2024-04-26 14:59:47.501127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794571 ] 00:19:01.990 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.990 [2024-04-26 14:59:47.535061] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:01.990 [2024-04-26 14:59:47.561851] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.990 [2024-04-26 14:59:47.643875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.248 14:59:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:02.248 14:59:47 -- common/autotest_common.sh@850 -- # return 0 00:19:02.248 14:59:47 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FT9gS7BfMH 00:19:02.248 [2024-04-26 14:59:47.979305] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.248 [2024-04-26 14:59:47.979459] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:02.507 [2024-04-26 14:59:47.991814] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:02.507 [2024-04-26 14:59:47.991845] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:02.507 [2024-04-26 14:59:47.991894] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:02.507 [2024-04-26 14:59:47.992507] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799b00 (107): Transport endpoint is not connected 00:19:02.507 [2024-04-26 14:59:47.993497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799b00 (9): Bad file descriptor 00:19:02.507 [2024-04-26 14:59:47.994496] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:02.507 [2024-04-26 14:59:47.994516] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:02.507 [2024-04-26 14:59:47.994528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
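Test 152 is the mirror case: host1 is valid but nqn.2016-06.io.spdk:cnode2 was never created, so the identity again resolves to nothing. When triaging either variant it helps to dump what the target actually has provisioned; a hedged example using the standard RPC (exact JSON output shape abbreviated here):

  rpc.py nvmf_get_subsystems  # lists each subsystem with its listeners and allowed hosts
  # expect cnode1 with hosts [nqn.2016-06.io.spdk:host1]; no cnode2 entry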
00:19:02.507 request: 00:19:02.507 { 00:19:02.507 "name": "TLSTEST", 00:19:02.507 "trtype": "tcp", 00:19:02.507 "traddr": "10.0.0.2", 00:19:02.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.507 "adrfam": "ipv4", 00:19:02.507 "trsvcid": "4420", 00:19:02.507 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:02.507 "psk": "/tmp/tmp.FT9gS7BfMH", 00:19:02.507 "method": "bdev_nvme_attach_controller", 00:19:02.507 "req_id": 1 00:19:02.507 } 00:19:02.507 Got JSON-RPC error response 00:19:02.507 response: 00:19:02.507 { 00:19:02.507 "code": -32602, 00:19:02.507 "message": "Invalid parameters" 00:19:02.507 } 00:19:02.507 14:59:48 -- target/tls.sh@36 -- # killprocess 3794571 00:19:02.507 14:59:48 -- common/autotest_common.sh@936 -- # '[' -z 3794571 ']' 00:19:02.507 14:59:48 -- common/autotest_common.sh@940 -- # kill -0 3794571 00:19:02.507 14:59:48 -- common/autotest_common.sh@941 -- # uname 00:19:02.507 14:59:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:02.507 14:59:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3794571 00:19:02.507 14:59:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:02.507 14:59:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:02.507 14:59:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3794571' 00:19:02.507 killing process with pid 3794571 00:19:02.507 14:59:48 -- common/autotest_common.sh@955 -- # kill 3794571 00:19:02.507 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.507 00:19:02.507 Latency(us) 00:19:02.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.507 =================================================================================================================== 00:19:02.507 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.507 [2024-04-26 14:59:48.036658] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:02.507 14:59:48 -- common/autotest_common.sh@960 -- # wait 3794571 00:19:02.766 14:59:48 -- target/tls.sh@37 -- # return 1 00:19:02.766 14:59:48 -- common/autotest_common.sh@641 -- # es=1 00:19:02.766 14:59:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:02.766 14:59:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:02.766 14:59:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:02.766 14:59:48 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:02.766 14:59:48 -- common/autotest_common.sh@638 -- # local es=0 00:19:02.766 14:59:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:02.766 14:59:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:02.766 14:59:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:02.766 14:59:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:02.766 14:59:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:02.766 14:59:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:02.766 14:59:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:02.766 14:59:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.766 14:59:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.766 14:59:48 -- target/tls.sh@23 -- # psk= 
00:19:02.766 14:59:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.766 14:59:48 -- target/tls.sh@28 -- # bdevperf_pid=3794705 00:19:02.766 14:59:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.766 14:59:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.766 14:59:48 -- target/tls.sh@31 -- # waitforlisten 3794705 /var/tmp/bdevperf.sock 00:19:02.766 14:59:48 -- common/autotest_common.sh@817 -- # '[' -z 3794705 ']' 00:19:02.766 14:59:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.766 14:59:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:02.766 14:59:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.766 14:59:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:02.766 14:59:48 -- common/autotest_common.sh@10 -- # set +x 00:19:02.766 [2024-04-26 14:59:48.300254] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:02.766 [2024-04-26 14:59:48.300337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794705 ] 00:19:02.766 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.766 [2024-04-26 14:59:48.331342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:02.766 [2024-04-26 14:59:48.358149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.766 [2024-04-26 14:59:48.437738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.024 14:59:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:03.024 14:59:48 -- common/autotest_common.sh@850 -- # return 0 00:19:03.024 14:59:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:03.282 [2024-04-26 14:59:48.773795] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:03.282 [2024-04-26 14:59:48.775036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ff1f0 (9): Bad file descriptor 00:19:03.282 [2024-04-26 14:59:48.776045] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:03.282 [2024-04-26 14:59:48.776067] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:03.282 [2024-04-26 14:59:48.776089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
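Test 155 drops --psk entirely, so bdevperf speaks plaintext NVMe/TCP to a listener that was created with -k and is waiting for a TLS ClientHello; the read fails with errno 107 before any controller exists. A sketch of the two listener flavors, where the second, plain listener on port 4421 is purely illustrative and not part of this run:

  # TLS listener, as configured throughout this log (-k)
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k
  # hypothetical plain-TCP listener; attaching to it would need no --psk
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421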
00:19:03.282 request: 00:19:03.282 { 00:19:03.282 "name": "TLSTEST", 00:19:03.282 "trtype": "tcp", 00:19:03.282 "traddr": "10.0.0.2", 00:19:03.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.282 "adrfam": "ipv4", 00:19:03.282 "trsvcid": "4420", 00:19:03.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.282 "method": "bdev_nvme_attach_controller", 00:19:03.282 "req_id": 1 00:19:03.282 } 00:19:03.282 Got JSON-RPC error response 00:19:03.282 response: 00:19:03.282 { 00:19:03.282 "code": -32602, 00:19:03.282 "message": "Invalid parameters" 00:19:03.282 } 00:19:03.282 14:59:48 -- target/tls.sh@36 -- # killprocess 3794705 00:19:03.282 14:59:48 -- common/autotest_common.sh@936 -- # '[' -z 3794705 ']' 00:19:03.282 14:59:48 -- common/autotest_common.sh@940 -- # kill -0 3794705 00:19:03.282 14:59:48 -- common/autotest_common.sh@941 -- # uname 00:19:03.282 14:59:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:03.282 14:59:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3794705 00:19:03.282 14:59:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:03.282 14:59:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:03.282 14:59:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3794705' 00:19:03.282 killing process with pid 3794705 00:19:03.282 14:59:48 -- common/autotest_common.sh@955 -- # kill 3794705 00:19:03.282 Received shutdown signal, test time was about 10.000000 seconds 00:19:03.282 00:19:03.282 Latency(us) 00:19:03.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.282 =================================================================================================================== 00:19:03.282 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:03.282 14:59:48 -- common/autotest_common.sh@960 -- # wait 3794705 00:19:03.282 14:59:49 -- target/tls.sh@37 -- # return 1 00:19:03.282 14:59:49 -- common/autotest_common.sh@641 -- # es=1 00:19:03.590 14:59:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:03.590 14:59:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:03.590 14:59:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:03.590 14:59:49 -- target/tls.sh@158 -- # killprocess 3791320 00:19:03.590 14:59:49 -- common/autotest_common.sh@936 -- # '[' -z 3791320 ']' 00:19:03.590 14:59:49 -- common/autotest_common.sh@940 -- # kill -0 3791320 00:19:03.590 14:59:49 -- common/autotest_common.sh@941 -- # uname 00:19:03.590 14:59:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:03.590 14:59:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3791320 00:19:03.590 14:59:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:03.590 14:59:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:03.590 14:59:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3791320' 00:19:03.590 killing process with pid 3791320 00:19:03.590 14:59:49 -- common/autotest_common.sh@955 -- # kill 3791320 00:19:03.590 [2024-04-26 14:59:49.050496] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:03.590 14:59:49 -- common/autotest_common.sh@960 -- # wait 3791320 00:19:03.590 14:59:49 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:03.590 14:59:49 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:19:03.590 14:59:49 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:03.590 14:59:49 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:03.590 14:59:49 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:03.590 14:59:49 -- nvmf/common.sh@693 -- # digest=2 00:19:03.590 14:59:49 -- nvmf/common.sh@694 -- # python - 00:19:03.849 14:59:49 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:03.849 14:59:49 -- target/tls.sh@160 -- # mktemp 00:19:03.849 14:59:49 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.0KKFjE8252 00:19:03.849 14:59:49 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:03.849 14:59:49 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.0KKFjE8252 00:19:03.849 14:59:49 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:03.849 14:59:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:03.849 14:59:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:03.849 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:03.849 14:59:49 -- nvmf/common.sh@470 -- # nvmfpid=3794858 00:19:03.849 14:59:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:03.849 14:59:49 -- nvmf/common.sh@471 -- # waitforlisten 3794858 00:19:03.849 14:59:49 -- common/autotest_common.sh@817 -- # '[' -z 3794858 ']' 00:19:03.849 14:59:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.849 14:59:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:03.849 14:59:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.850 14:59:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:03.850 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:03.850 [2024-04-26 14:59:49.377909] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:03.850 [2024-04-26 14:59:49.377992] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.850 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.850 [2024-04-26 14:59:49.415850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:03.850 [2024-04-26 14:59:49.441302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.850 [2024-04-26 14:59:49.524944] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.850 [2024-04-26 14:59:49.525016] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.850 [2024-04-26 14:59:49.525055] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.850 [2024-04-26 14:59:49.525067] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
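The format_interchange_psk call above builds the TLS PSK interchange form: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (02 here), and base64 of the configured key bytes with a 4-byte CRC32 trailer, all colon-delimited. A sketch that reproduces the key_long value from this log, assuming (as the harness's own `python -` helper appears to do) that the ASCII key string itself is the key material and that the CRC is appended little-endian:

  key=00112233445566778899aabbccddeeff0011223344556677
  # base64(key bytes + CRC32 trailer), wrapped in the NVMeTLSkey-1:02:...: frame
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key"
  # expected: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: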
00:19:03.850 [2024-04-26 14:59:49.525078] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.850 [2024-04-26 14:59:49.525107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.108 14:59:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:04.108 14:59:49 -- common/autotest_common.sh@850 -- # return 0 00:19:04.108 14:59:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:04.108 14:59:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:04.108 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:04.108 14:59:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.108 14:59:49 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.0KKFjE8252 00:19:04.108 14:59:49 -- target/tls.sh@49 -- # local key=/tmp/tmp.0KKFjE8252 00:19:04.108 14:59:49 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:04.366 [2024-04-26 14:59:49.895305] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.366 14:59:49 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:04.627 14:59:50 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:04.884 [2024-04-26 14:59:50.428750] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.884 [2024-04-26 14:59:50.429049] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.884 14:59:50 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.141 malloc0 00:19:05.141 14:59:50 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.398 14:59:50 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KKFjE8252 00:19:05.656 [2024-04-26 14:59:51.192970] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:05.656 14:59:51 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0KKFjE8252 00:19:05.656 14:59:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:05.656 14:59:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:05.656 14:59:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:05.656 14:59:51 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0KKFjE8252' 00:19:05.656 14:59:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.656 14:59:51 -- target/tls.sh@28 -- # bdevperf_pid=3795143 00:19:05.656 14:59:51 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.656 14:59:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.656 14:59:51 -- target/tls.sh@31 -- # waitforlisten 3795143 /var/tmp/bdevperf.sock 00:19:05.656 14:59:51 -- common/autotest_common.sh@817 -- # '[' -z 
3795143 ']' 00:19:05.656 14:59:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.656 14:59:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:05.656 14:59:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.656 14:59:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:05.656 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:19:05.656 [2024-04-26 14:59:51.256878] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:05.656 [2024-04-26 14:59:51.256958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795143 ] 00:19:05.656 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.656 [2024-04-26 14:59:51.287649] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:05.656 [2024-04-26 14:59:51.314053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.656 [2024-04-26 14:59:51.395516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.913 14:59:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:05.913 14:59:51 -- common/autotest_common.sh@850 -- # return 0 00:19:05.913 14:59:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KKFjE8252 00:19:06.171 [2024-04-26 14:59:51.766355] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.171 [2024-04-26 14:59:51.766476] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:06.171 TLSTESTn1 00:19:06.171 14:59:51 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:06.428 Running I/O for 10 seconds... 
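The attach finally succeeds here because both ends now share /tmp/tmp.0KKFjE8252 with 0600 permissions. bdevperf was started with -z, so it idles on its RPC socket until driven; perform_tests then runs the verify workload for the 10 seconds requested at spawn time (the -t 20 on bdevperf.py reads as a bound on the RPC itself, separate from the 10-second workload). The happy path, condensed from this log with paths shortened:

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KKFjE8252
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests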
00:19:16.396 00:19:16.396 Latency(us) 00:19:16.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.397 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:16.397 Verification LBA range: start 0x0 length 0x2000 00:19:16.397 TLSTESTn1 : 10.02 3637.57 14.21 0.00 0.00 35127.35 10097.40 40972.14 00:19:16.397 =================================================================================================================== 00:19:16.397 Total : 3637.57 14.21 0.00 0.00 35127.35 10097.40 40972.14 00:19:16.397 0 00:19:16.397 15:00:02 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.397 15:00:02 -- target/tls.sh@45 -- # killprocess 3795143 00:19:16.397 15:00:02 -- common/autotest_common.sh@936 -- # '[' -z 3795143 ']' 00:19:16.397 15:00:02 -- common/autotest_common.sh@940 -- # kill -0 3795143 00:19:16.397 15:00:02 -- common/autotest_common.sh@941 -- # uname 00:19:16.397 15:00:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.397 15:00:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3795143 00:19:16.397 15:00:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:16.397 15:00:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:16.397 15:00:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3795143' 00:19:16.397 killing process with pid 3795143 00:19:16.397 15:00:02 -- common/autotest_common.sh@955 -- # kill 3795143 00:19:16.397 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.397 00:19:16.397 Latency(us) 00:19:16.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.397 =================================================================================================================== 00:19:16.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.397 [2024-04-26 15:00:02.057418] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.397 15:00:02 -- common/autotest_common.sh@960 -- # wait 3795143 00:19:16.654 15:00:02 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.0KKFjE8252 00:19:16.654 15:00:02 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0KKFjE8252 00:19:16.654 15:00:02 -- common/autotest_common.sh@638 -- # local es=0 00:19:16.654 15:00:02 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0KKFjE8252 00:19:16.654 15:00:02 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:16.654 15:00:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:16.654 15:00:02 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:16.654 15:00:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:16.654 15:00:02 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0KKFjE8252 00:19:16.654 15:00:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.654 15:00:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:16.654 15:00:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.654 15:00:02 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0KKFjE8252' 00:19:16.654 15:00:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.654 15:00:02 -- target/tls.sh@28 -- # 
bdevperf_pid=3796451 00:19:16.654 15:00:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.654 15:00:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.654 15:00:02 -- target/tls.sh@31 -- # waitforlisten 3796451 /var/tmp/bdevperf.sock 00:19:16.654 15:00:02 -- common/autotest_common.sh@817 -- # '[' -z 3796451 ']' 00:19:16.654 15:00:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.654 15:00:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:16.654 15:00:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.654 15:00:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:16.654 15:00:02 -- common/autotest_common.sh@10 -- # set +x 00:19:16.654 [2024-04-26 15:00:02.317505] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:16.654 [2024-04-26 15:00:02.317592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3796451 ] 00:19:16.654 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.654 [2024-04-26 15:00:02.349980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:16.654 [2024-04-26 15:00:02.378576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.911 [2024-04-26 15:00:02.463654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.911 15:00:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:16.912 15:00:02 -- common/autotest_common.sh@850 -- # return 0 00:19:16.912 15:00:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KKFjE8252 00:19:17.172 [2024-04-26 15:00:02.793106] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.172 [2024-04-26 15:00:02.793198] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:17.172 [2024-04-26 15:00:02.793214] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.0KKFjE8252 00:19:17.172 request: 00:19:17.172 { 00:19:17.172 "name": "TLSTEST", 00:19:17.172 "trtype": "tcp", 00:19:17.172 "traddr": "10.0.0.2", 00:19:17.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.172 "adrfam": "ipv4", 00:19:17.172 "trsvcid": "4420", 00:19:17.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.172 "psk": "/tmp/tmp.0KKFjE8252", 00:19:17.172 "method": "bdev_nvme_attach_controller", 00:19:17.172 "req_id": 1 00:19:17.172 } 00:19:17.172 Got JSON-RPC error response 00:19:17.172 response: 00:19:17.172 { 00:19:17.172 "code": -1, 00:19:17.172 "message": "Operation not permitted" 00:19:17.172 } 00:19:17.172 15:00:02 -- target/tls.sh@36 -- # killprocess 3796451 00:19:17.172 15:00:02 -- common/autotest_common.sh@936 -- # '[' -z 3796451 ']' 00:19:17.172 15:00:02 -- 
common/autotest_common.sh@940 -- # kill -0 3796451 00:19:17.172 15:00:02 -- common/autotest_common.sh@941 -- # uname 00:19:17.172 15:00:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.172 15:00:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3796451 00:19:17.172 15:00:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:17.172 15:00:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:17.172 15:00:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3796451' 00:19:17.172 killing process with pid 3796451 00:19:17.172 15:00:02 -- common/autotest_common.sh@955 -- # kill 3796451 00:19:17.172 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.172 00:19:17.172 Latency(us) 00:19:17.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.172 =================================================================================================================== 00:19:17.172 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.172 15:00:02 -- common/autotest_common.sh@960 -- # wait 3796451 00:19:17.432 15:00:03 -- target/tls.sh@37 -- # return 1 00:19:17.432 15:00:03 -- common/autotest_common.sh@641 -- # es=1 00:19:17.432 15:00:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:17.432 15:00:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:17.432 15:00:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:17.432 15:00:03 -- target/tls.sh@174 -- # killprocess 3794858 00:19:17.432 15:00:03 -- common/autotest_common.sh@936 -- # '[' -z 3794858 ']' 00:19:17.432 15:00:03 -- common/autotest_common.sh@940 -- # kill -0 3794858 00:19:17.432 15:00:03 -- common/autotest_common.sh@941 -- # uname 00:19:17.432 15:00:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.432 15:00:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3794858 00:19:17.432 15:00:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:17.432 15:00:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:17.433 15:00:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3794858' 00:19:17.433 killing process with pid 3794858 00:19:17.433 15:00:03 -- common/autotest_common.sh@955 -- # kill 3794858 00:19:17.433 [2024-04-26 15:00:03.093880] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:17.433 15:00:03 -- common/autotest_common.sh@960 -- # wait 3794858 00:19:17.690 15:00:03 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:17.690 15:00:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:17.690 15:00:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:17.690 15:00:03 -- common/autotest_common.sh@10 -- # set +x 00:19:17.690 15:00:03 -- nvmf/common.sh@470 -- # nvmfpid=3796595 00:19:17.690 15:00:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.690 15:00:03 -- nvmf/common.sh@471 -- # waitforlisten 3796595 00:19:17.690 15:00:03 -- common/autotest_common.sh@817 -- # '[' -z 3796595 ']' 00:19:17.690 15:00:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.690 15:00:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:17.690 15:00:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.690 15:00:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:17.690 15:00:03 -- common/autotest_common.sh@10 -- # set +x 00:19:17.690 [2024-04-26 15:00:03.385150] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:17.690 [2024-04-26 15:00:03.385239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.690 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.690 [2024-04-26 15:00:03.424391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:17.947 [2024-04-26 15:00:03.457128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.947 [2024-04-26 15:00:03.549566] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.947 [2024-04-26 15:00:03.549629] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.947 [2024-04-26 15:00:03.549656] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.947 [2024-04-26 15:00:03.549667] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.947 [2024-04-26 15:00:03.549678] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.947 [2024-04-26 15:00:03.549705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.947 15:00:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:17.947 15:00:03 -- common/autotest_common.sh@850 -- # return 0 00:19:17.947 15:00:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:17.947 15:00:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:17.947 15:00:03 -- common/autotest_common.sh@10 -- # set +x 00:19:18.204 15:00:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.204 15:00:03 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.0KKFjE8252 00:19:18.204 15:00:03 -- common/autotest_common.sh@638 -- # local es=0 00:19:18.204 15:00:03 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0KKFjE8252 00:19:18.204 15:00:03 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:19:18.204 15:00:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:18.204 15:00:03 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:19:18.204 15:00:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:18.204 15:00:03 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.0KKFjE8252 00:19:18.204 15:00:03 -- target/tls.sh@49 -- # local key=/tmp/tmp.0KKFjE8252 00:19:18.204 15:00:03 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.461 [2024-04-26 15:00:03.963752] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.461 15:00:03 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:18.718 15:00:04 -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:18.975 [2024-04-26 15:00:04.525247] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.975 [2024-04-26 15:00:04.525527] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.975 15:00:04 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.233 malloc0 00:19:19.233 15:00:04 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:19.490 15:00:05 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KKFjE8252 00:19:19.799 [2024-04-26 15:00:05.266606] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:19.799 [2024-04-26 15:00:05.266659] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:19.799 [2024-04-26 15:00:05.266692] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:19:19.799 request: 00:19:19.799 { 00:19:19.799 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.799 "host": "nqn.2016-06.io.spdk:host1", 00:19:19.799 "psk": "/tmp/tmp.0KKFjE8252", 00:19:19.799 "method": "nvmf_subsystem_add_host", 00:19:19.799 "req_id": 1 00:19:19.799 } 00:19:19.799 Got JSON-RPC error response 00:19:19.799 response: 00:19:19.799 { 00:19:19.799 "code": -32603, 00:19:19.799 "message": "Internal error" 00:19:19.799 } 00:19:19.799 15:00:05 -- common/autotest_common.sh@641 -- # es=1 00:19:19.799 15:00:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.799 15:00:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.799 15:00:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.799 15:00:05 -- target/tls.sh@180 -- # killprocess 3796595 00:19:19.799 15:00:05 -- common/autotest_common.sh@936 -- # '[' -z 3796595 ']' 00:19:19.799 15:00:05 -- common/autotest_common.sh@940 -- # kill -0 3796595 00:19:19.799 15:00:05 -- common/autotest_common.sh@941 -- # uname 00:19:19.799 15:00:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:19.799 15:00:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3796595 00:19:19.799 15:00:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:19.799 15:00:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:19.799 15:00:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3796595' 00:19:19.799 killing process with pid 3796595 00:19:19.799 15:00:05 -- common/autotest_common.sh@955 -- # kill 3796595 00:19:19.799 15:00:05 -- common/autotest_common.sh@960 -- # wait 3796595 00:19:20.076 15:00:05 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.0KKFjE8252 00:19:20.076 15:00:05 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:20.076 15:00:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:20.076 15:00:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:20.076 15:00:05 -- common/autotest_common.sh@10 -- # set +x 00:19:20.076 15:00:05 -- nvmf/common.sh@470 -- # nvmfpid=3796891 00:19:20.076 15:00:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:20.076 15:00:05 -- nvmf/common.sh@471 -- # waitforlisten 3796891 00:19:20.076 15:00:05 -- common/autotest_common.sh@817 -- # '[' -z 3796891 ']' 00:19:20.076 15:00:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.076 15:00:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:20.076 15:00:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.076 15:00:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:20.076 15:00:05 -- common/autotest_common.sh@10 -- # set +x 00:19:20.076 [2024-04-26 15:00:05.626073] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:20.076 [2024-04-26 15:00:05.626160] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.076 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.076 [2024-04-26 15:00:05.664150] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:20.076 [2024-04-26 15:00:05.696836] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.076 [2024-04-26 15:00:05.790608] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.076 [2024-04-26 15:00:05.790672] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.076 [2024-04-26 15:00:05.790697] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.076 [2024-04-26 15:00:05.790709] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.077 [2024-04-26 15:00:05.790721] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
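This target restart follows the permission experiment: after chmod 0666, the initiator refused to load the key (`Incorrect permissions for PSK file`, surfaced as JSON-RPC -1 Operation not permitted) and the target refused nvmf_subsystem_add_host (`Could not retrieve PSK from file`, surfaced as -32603 Internal error); only after the chmod 0600 at tls.sh@181 can provisioning proceed. The guard, restated:

  chmod 0600 /tmp/tmp.0KKFjE8252    # PSK files must not be group/world accessible
  stat -c '%a' /tmp/tmp.0KKFjE8252  # expect 600 before add_host or attach_controller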
00:19:20.077 [2024-04-26 15:00:05.790757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.334 15:00:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:20.334 15:00:05 -- common/autotest_common.sh@850 -- # return 0 00:19:20.334 15:00:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:20.334 15:00:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:20.334 15:00:05 -- common/autotest_common.sh@10 -- # set +x 00:19:20.334 15:00:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.334 15:00:05 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.0KKFjE8252 00:19:20.334 15:00:05 -- target/tls.sh@49 -- # local key=/tmp/tmp.0KKFjE8252 00:19:20.334 15:00:05 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.591 [2024-04-26 15:00:06.168108] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.591 15:00:06 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:20.848 15:00:06 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:21.104 [2024-04-26 15:00:06.665448] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.104 [2024-04-26 15:00:06.665709] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.104 15:00:06 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:21.361 malloc0 00:19:21.361 15:00:06 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:21.619 15:00:07 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KKFjE8252 00:19:21.879 [2024-04-26 15:00:07.403950] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:21.879 15:00:07 -- target/tls.sh@188 -- # bdevperf_pid=3797176 00:19:21.879 15:00:07 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.879 15:00:07 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.879 15:00:07 -- target/tls.sh@191 -- # waitforlisten 3797176 /var/tmp/bdevperf.sock 00:19:21.879 15:00:07 -- common/autotest_common.sh@817 -- # '[' -z 3797176 ']' 00:19:21.879 15:00:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.879 15:00:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:21.879 15:00:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
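For reference, the setup_nvmf_tgt sequence traced above reduces to six RPCs; rpc.py here abbreviates the full scripts/rpc.py path used in this run, and the address, NQNs, and PSK path are this run's values:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KKFjE8252

The -k flag on the listener is what enables the experimental TLS handling the NOTICE lines report; --psk points at the pre-shared key file for host1.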
00:19:21.879 15:00:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:21.879 15:00:07 -- common/autotest_common.sh@10 -- # set +x 00:19:21.879 [2024-04-26 15:00:07.465449] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:21.879 [2024-04-26 15:00:07.465547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797176 ] 00:19:21.879 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.879 [2024-04-26 15:00:07.498042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:21.879 [2024-04-26 15:00:07.525323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.879 [2024-04-26 15:00:07.607556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.138 15:00:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.138 15:00:07 -- common/autotest_common.sh@850 -- # return 0 00:19:22.138 15:00:07 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KKFjE8252 00:19:22.395 [2024-04-26 15:00:07.943782] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.395 [2024-04-26 15:00:07.943940] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:22.395 TLSTESTn1 00:19:22.395 15:00:08 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:22.652 15:00:08 -- target/tls.sh@196 -- # tgtconf='{ 00:19:22.652 "subsystems": [ 00:19:22.652 { 00:19:22.652 "subsystem": "keyring", 00:19:22.652 "config": [] 00:19:22.652 }, 00:19:22.652 { 00:19:22.652 "subsystem": "iobuf", 00:19:22.652 "config": [ 00:19:22.652 { 00:19:22.652 "method": "iobuf_set_options", 00:19:22.652 "params": { 00:19:22.652 "small_pool_count": 8192, 00:19:22.652 "large_pool_count": 1024, 00:19:22.652 "small_bufsize": 8192, 00:19:22.652 "large_bufsize": 135168 00:19:22.652 } 00:19:22.652 } 00:19:22.652 ] 00:19:22.652 }, 00:19:22.652 { 00:19:22.652 "subsystem": "sock", 00:19:22.652 "config": [ 00:19:22.652 { 00:19:22.652 "method": "sock_impl_set_options", 00:19:22.652 "params": { 00:19:22.652 "impl_name": "posix", 00:19:22.652 "recv_buf_size": 2097152, 00:19:22.652 "send_buf_size": 2097152, 00:19:22.652 "enable_recv_pipe": true, 00:19:22.652 "enable_quickack": false, 00:19:22.652 "enable_placement_id": 0, 00:19:22.652 "enable_zerocopy_send_server": true, 00:19:22.652 "enable_zerocopy_send_client": false, 00:19:22.652 "zerocopy_threshold": 0, 00:19:22.652 "tls_version": 0, 00:19:22.652 "enable_ktls": false 00:19:22.652 } 00:19:22.652 }, 00:19:22.652 { 00:19:22.652 "method": "sock_impl_set_options", 00:19:22.652 "params": { 00:19:22.653 "impl_name": "ssl", 00:19:22.653 "recv_buf_size": 4096, 00:19:22.653 "send_buf_size": 4096, 00:19:22.653 "enable_recv_pipe": true, 00:19:22.653 "enable_quickack": false, 00:19:22.653 "enable_placement_id": 0, 00:19:22.653 "enable_zerocopy_send_server": true, 00:19:22.653 "enable_zerocopy_send_client": false, 00:19:22.653 "zerocopy_threshold": 0, 
00:19:22.653 "tls_version": 0, 00:19:22.653 "enable_ktls": false 00:19:22.653 } 00:19:22.653 } 00:19:22.653 ] 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "subsystem": "vmd", 00:19:22.653 "config": [] 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "subsystem": "accel", 00:19:22.653 "config": [ 00:19:22.653 { 00:19:22.653 "method": "accel_set_options", 00:19:22.653 "params": { 00:19:22.653 "small_cache_size": 128, 00:19:22.653 "large_cache_size": 16, 00:19:22.653 "task_count": 2048, 00:19:22.653 "sequence_count": 2048, 00:19:22.653 "buf_count": 2048 00:19:22.653 } 00:19:22.653 } 00:19:22.653 ] 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "subsystem": "bdev", 00:19:22.653 "config": [ 00:19:22.653 { 00:19:22.653 "method": "bdev_set_options", 00:19:22.653 "params": { 00:19:22.653 "bdev_io_pool_size": 65535, 00:19:22.653 "bdev_io_cache_size": 256, 00:19:22.653 "bdev_auto_examine": true, 00:19:22.653 "iobuf_small_cache_size": 128, 00:19:22.653 "iobuf_large_cache_size": 16 00:19:22.653 } 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "method": "bdev_raid_set_options", 00:19:22.653 "params": { 00:19:22.653 "process_window_size_kb": 1024 00:19:22.653 } 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "method": "bdev_iscsi_set_options", 00:19:22.653 "params": { 00:19:22.653 "timeout_sec": 30 00:19:22.653 } 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "method": "bdev_nvme_set_options", 00:19:22.653 "params": { 00:19:22.653 "action_on_timeout": "none", 00:19:22.653 "timeout_us": 0, 00:19:22.653 "timeout_admin_us": 0, 00:19:22.653 "keep_alive_timeout_ms": 10000, 00:19:22.653 "arbitration_burst": 0, 00:19:22.653 "low_priority_weight": 0, 00:19:22.653 "medium_priority_weight": 0, 00:19:22.653 "high_priority_weight": 0, 00:19:22.653 "nvme_adminq_poll_period_us": 10000, 00:19:22.653 "nvme_ioq_poll_period_us": 0, 00:19:22.653 "io_queue_requests": 0, 00:19:22.653 "delay_cmd_submit": true, 00:19:22.653 "transport_retry_count": 4, 00:19:22.653 "bdev_retry_count": 3, 00:19:22.653 "transport_ack_timeout": 0, 00:19:22.653 "ctrlr_loss_timeout_sec": 0, 00:19:22.653 "reconnect_delay_sec": 0, 00:19:22.653 "fast_io_fail_timeout_sec": 0, 00:19:22.653 "disable_auto_failback": false, 00:19:22.653 "generate_uuids": false, 00:19:22.653 "transport_tos": 0, 00:19:22.653 "nvme_error_stat": false, 00:19:22.653 "rdma_srq_size": 0, 00:19:22.653 "io_path_stat": false, 00:19:22.653 "allow_accel_sequence": false, 00:19:22.653 "rdma_max_cq_size": 0, 00:19:22.653 "rdma_cm_event_timeout_ms": 0, 00:19:22.653 "dhchap_digests": [ 00:19:22.653 "sha256", 00:19:22.653 "sha384", 00:19:22.653 "sha512" 00:19:22.653 ], 00:19:22.653 "dhchap_dhgroups": [ 00:19:22.653 "null", 00:19:22.653 "ffdhe2048", 00:19:22.653 "ffdhe3072", 00:19:22.653 "ffdhe4096", 00:19:22.653 "ffdhe6144", 00:19:22.653 "ffdhe8192" 00:19:22.653 ] 00:19:22.653 } 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "method": "bdev_nvme_set_hotplug", 00:19:22.653 "params": { 00:19:22.653 "period_us": 100000, 00:19:22.653 "enable": false 00:19:22.653 } 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "method": "bdev_malloc_create", 00:19:22.653 "params": { 00:19:22.653 "name": "malloc0", 00:19:22.653 "num_blocks": 8192, 00:19:22.653 "block_size": 4096, 00:19:22.653 "physical_block_size": 4096, 00:19:22.653 "uuid": "c7e3c8df-c53f-4713-b21d-dd34ca31c110", 00:19:22.653 "optimal_io_boundary": 0 00:19:22.653 } 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "method": "bdev_wait_for_examine" 00:19:22.653 } 00:19:22.653 ] 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "subsystem": "nbd", 00:19:22.653 "config": [] 
00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "subsystem": "scheduler", 00:19:22.653 "config": [ 00:19:22.653 { 00:19:22.653 "method": "framework_set_scheduler", 00:19:22.653 "params": { 00:19:22.653 "name": "static" 00:19:22.653 } 00:19:22.653 } 00:19:22.653 ] 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "subsystem": "nvmf", 00:19:22.653 "config": [ 00:19:22.653 { 00:19:22.653 "method": "nvmf_set_config", 00:19:22.653 "params": { 00:19:22.653 "discovery_filter": "match_any", 00:19:22.653 "admin_cmd_passthru": { 00:19:22.653 "identify_ctrlr": false 00:19:22.653 } 00:19:22.653 } 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "method": "nvmf_set_max_subsystems", 00:19:22.653 "params": { 00:19:22.653 "max_subsystems": 1024 00:19:22.653 } 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "method": "nvmf_set_crdt", 00:19:22.653 "params": { 00:19:22.653 "crdt1": 0, 00:19:22.653 "crdt2": 0, 00:19:22.653 "crdt3": 0 00:19:22.653 } 00:19:22.653 }, 00:19:22.653 { 00:19:22.653 "method": "nvmf_create_transport", 00:19:22.653 "params": { 00:19:22.653 "trtype": "TCP", 00:19:22.653 "max_queue_depth": 128, 00:19:22.653 "max_io_qpairs_per_ctrlr": 127, 00:19:22.653 "in_capsule_data_size": 4096, 00:19:22.653 "max_io_size": 131072, 00:19:22.653 "io_unit_size": 131072, 00:19:22.653 "max_aq_depth": 128, 00:19:22.653 "num_shared_buffers": 511, 00:19:22.653 "buf_cache_size": 4294967295, 00:19:22.653 "dif_insert_or_strip": false, 00:19:22.653 "zcopy": false, 00:19:22.654 "c2h_success": false, 00:19:22.654 "sock_priority": 0, 00:19:22.654 "abort_timeout_sec": 1, 00:19:22.654 "ack_timeout": 0, 00:19:22.654 "data_wr_pool_size": 0 00:19:22.654 } 00:19:22.654 }, 00:19:22.654 { 00:19:22.654 "method": "nvmf_create_subsystem", 00:19:22.654 "params": { 00:19:22.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.654 "allow_any_host": false, 00:19:22.654 "serial_number": "SPDK00000000000001", 00:19:22.654 "model_number": "SPDK bdev Controller", 00:19:22.654 "max_namespaces": 10, 00:19:22.654 "min_cntlid": 1, 00:19:22.654 "max_cntlid": 65519, 00:19:22.654 "ana_reporting": false 00:19:22.654 } 00:19:22.654 }, 00:19:22.654 { 00:19:22.654 "method": "nvmf_subsystem_add_host", 00:19:22.654 "params": { 00:19:22.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.654 "host": "nqn.2016-06.io.spdk:host1", 00:19:22.654 "psk": "/tmp/tmp.0KKFjE8252" 00:19:22.654 } 00:19:22.654 }, 00:19:22.654 { 00:19:22.654 "method": "nvmf_subsystem_add_ns", 00:19:22.654 "params": { 00:19:22.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.654 "namespace": { 00:19:22.654 "nsid": 1, 00:19:22.654 "bdev_name": "malloc0", 00:19:22.654 "nguid": "C7E3C8DFC53F4713B21DDD34CA31C110", 00:19:22.654 "uuid": "c7e3c8df-c53f-4713-b21d-dd34ca31c110", 00:19:22.654 "no_auto_visible": false 00:19:22.654 } 00:19:22.654 } 00:19:22.654 }, 00:19:22.654 { 00:19:22.654 "method": "nvmf_subsystem_add_listener", 00:19:22.654 "params": { 00:19:22.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.654 "listen_address": { 00:19:22.654 "trtype": "TCP", 00:19:22.654 "adrfam": "IPv4", 00:19:22.654 "traddr": "10.0.0.2", 00:19:22.654 "trsvcid": "4420" 00:19:22.654 }, 00:19:22.654 "secure_channel": true 00:19:22.654 } 00:19:22.654 } 00:19:22.654 ] 00:19:22.654 } 00:19:22.654 ] 00:19:22.654 }' 00:19:22.654 15:00:08 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:23.225 15:00:08 -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:23.225 "subsystems": [ 00:19:23.225 { 00:19:23.225 "subsystem": "keyring", 00:19:23.225 "config": 
[] 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "subsystem": "iobuf", 00:19:23.225 "config": [ 00:19:23.225 { 00:19:23.225 "method": "iobuf_set_options", 00:19:23.225 "params": { 00:19:23.225 "small_pool_count": 8192, 00:19:23.225 "large_pool_count": 1024, 00:19:23.225 "small_bufsize": 8192, 00:19:23.225 "large_bufsize": 135168 00:19:23.225 } 00:19:23.225 } 00:19:23.225 ] 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "subsystem": "sock", 00:19:23.225 "config": [ 00:19:23.225 { 00:19:23.225 "method": "sock_impl_set_options", 00:19:23.225 "params": { 00:19:23.225 "impl_name": "posix", 00:19:23.225 "recv_buf_size": 2097152, 00:19:23.225 "send_buf_size": 2097152, 00:19:23.225 "enable_recv_pipe": true, 00:19:23.225 "enable_quickack": false, 00:19:23.225 "enable_placement_id": 0, 00:19:23.225 "enable_zerocopy_send_server": true, 00:19:23.225 "enable_zerocopy_send_client": false, 00:19:23.225 "zerocopy_threshold": 0, 00:19:23.225 "tls_version": 0, 00:19:23.225 "enable_ktls": false 00:19:23.225 } 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "method": "sock_impl_set_options", 00:19:23.225 "params": { 00:19:23.225 "impl_name": "ssl", 00:19:23.225 "recv_buf_size": 4096, 00:19:23.225 "send_buf_size": 4096, 00:19:23.225 "enable_recv_pipe": true, 00:19:23.225 "enable_quickack": false, 00:19:23.225 "enable_placement_id": 0, 00:19:23.225 "enable_zerocopy_send_server": true, 00:19:23.225 "enable_zerocopy_send_client": false, 00:19:23.225 "zerocopy_threshold": 0, 00:19:23.225 "tls_version": 0, 00:19:23.225 "enable_ktls": false 00:19:23.225 } 00:19:23.225 } 00:19:23.225 ] 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "subsystem": "vmd", 00:19:23.225 "config": [] 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "subsystem": "accel", 00:19:23.225 "config": [ 00:19:23.225 { 00:19:23.225 "method": "accel_set_options", 00:19:23.225 "params": { 00:19:23.225 "small_cache_size": 128, 00:19:23.225 "large_cache_size": 16, 00:19:23.225 "task_count": 2048, 00:19:23.225 "sequence_count": 2048, 00:19:23.225 "buf_count": 2048 00:19:23.225 } 00:19:23.225 } 00:19:23.225 ] 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "subsystem": "bdev", 00:19:23.225 "config": [ 00:19:23.225 { 00:19:23.225 "method": "bdev_set_options", 00:19:23.225 "params": { 00:19:23.225 "bdev_io_pool_size": 65535, 00:19:23.225 "bdev_io_cache_size": 256, 00:19:23.225 "bdev_auto_examine": true, 00:19:23.225 "iobuf_small_cache_size": 128, 00:19:23.225 "iobuf_large_cache_size": 16 00:19:23.225 } 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "method": "bdev_raid_set_options", 00:19:23.225 "params": { 00:19:23.226 "process_window_size_kb": 1024 00:19:23.226 } 00:19:23.226 }, 00:19:23.226 { 00:19:23.226 "method": "bdev_iscsi_set_options", 00:19:23.226 "params": { 00:19:23.226 "timeout_sec": 30 00:19:23.226 } 00:19:23.226 }, 00:19:23.226 { 00:19:23.226 "method": "bdev_nvme_set_options", 00:19:23.226 "params": { 00:19:23.226 "action_on_timeout": "none", 00:19:23.226 "timeout_us": 0, 00:19:23.226 "timeout_admin_us": 0, 00:19:23.226 "keep_alive_timeout_ms": 10000, 00:19:23.226 "arbitration_burst": 0, 00:19:23.226 "low_priority_weight": 0, 00:19:23.226 "medium_priority_weight": 0, 00:19:23.226 "high_priority_weight": 0, 00:19:23.226 "nvme_adminq_poll_period_us": 10000, 00:19:23.226 "nvme_ioq_poll_period_us": 0, 00:19:23.226 "io_queue_requests": 512, 00:19:23.226 "delay_cmd_submit": true, 00:19:23.226 "transport_retry_count": 4, 00:19:23.226 "bdev_retry_count": 3, 00:19:23.226 "transport_ack_timeout": 0, 00:19:23.226 "ctrlr_loss_timeout_sec": 0, 00:19:23.226 "reconnect_delay_sec": 
0, 00:19:23.226 "fast_io_fail_timeout_sec": 0, 00:19:23.226 "disable_auto_failback": false, 00:19:23.226 "generate_uuids": false, 00:19:23.226 "transport_tos": 0, 00:19:23.226 "nvme_error_stat": false, 00:19:23.226 "rdma_srq_size": 0, 00:19:23.226 "io_path_stat": false, 00:19:23.226 "allow_accel_sequence": false, 00:19:23.226 "rdma_max_cq_size": 0, 00:19:23.226 "rdma_cm_event_timeout_ms": 0, 00:19:23.226 "dhchap_digests": [ 00:19:23.226 "sha256", 00:19:23.226 "sha384", 00:19:23.226 "sha512" 00:19:23.226 ], 00:19:23.226 "dhchap_dhgroups": [ 00:19:23.226 "null", 00:19:23.226 "ffdhe2048", 00:19:23.226 "ffdhe3072", 00:19:23.226 "ffdhe4096", 00:19:23.226 "ffdhe6144", 00:19:23.226 "ffdhe8192" 00:19:23.226 ] 00:19:23.226 } 00:19:23.226 }, 00:19:23.226 { 00:19:23.226 "method": "bdev_nvme_attach_controller", 00:19:23.226 "params": { 00:19:23.226 "name": "TLSTEST", 00:19:23.226 "trtype": "TCP", 00:19:23.226 "adrfam": "IPv4", 00:19:23.226 "traddr": "10.0.0.2", 00:19:23.226 "trsvcid": "4420", 00:19:23.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.226 "prchk_reftag": false, 00:19:23.226 "prchk_guard": false, 00:19:23.226 "ctrlr_loss_timeout_sec": 0, 00:19:23.226 "reconnect_delay_sec": 0, 00:19:23.226 "fast_io_fail_timeout_sec": 0, 00:19:23.226 "psk": "/tmp/tmp.0KKFjE8252", 00:19:23.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.226 "hdgst": false, 00:19:23.226 "ddgst": false 00:19:23.226 } 00:19:23.226 }, 00:19:23.226 { 00:19:23.226 "method": "bdev_nvme_set_hotplug", 00:19:23.226 "params": { 00:19:23.226 "period_us": 100000, 00:19:23.226 "enable": false 00:19:23.226 } 00:19:23.226 }, 00:19:23.226 { 00:19:23.226 "method": "bdev_wait_for_examine" 00:19:23.226 } 00:19:23.226 ] 00:19:23.226 }, 00:19:23.226 { 00:19:23.226 "subsystem": "nbd", 00:19:23.226 "config": [] 00:19:23.226 } 00:19:23.226 ] 00:19:23.226 }' 00:19:23.226 15:00:08 -- target/tls.sh@199 -- # killprocess 3797176 00:19:23.226 15:00:08 -- common/autotest_common.sh@936 -- # '[' -z 3797176 ']' 00:19:23.226 15:00:08 -- common/autotest_common.sh@940 -- # kill -0 3797176 00:19:23.226 15:00:08 -- common/autotest_common.sh@941 -- # uname 00:19:23.226 15:00:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:23.226 15:00:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3797176 00:19:23.226 15:00:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:23.226 15:00:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:23.226 15:00:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3797176' 00:19:23.226 killing process with pid 3797176 00:19:23.226 15:00:08 -- common/autotest_common.sh@955 -- # kill 3797176 00:19:23.226 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.226 00:19:23.226 Latency(us) 00:19:23.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.226 =================================================================================================================== 00:19:23.226 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.226 [2024-04-26 15:00:08.690174] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:23.226 15:00:08 -- common/autotest_common.sh@960 -- # wait 3797176 00:19:23.226 15:00:08 -- target/tls.sh@200 -- # killprocess 3796891 00:19:23.226 15:00:08 -- common/autotest_common.sh@936 -- # '[' -z 3796891 ']' 00:19:23.226 15:00:08 -- common/autotest_common.sh@940 
-- # kill -0 3796891 00:19:23.226 15:00:08 -- common/autotest_common.sh@941 -- # uname 00:19:23.226 15:00:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:23.226 15:00:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3796891 00:19:23.226 15:00:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:23.226 15:00:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:23.226 15:00:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3796891' 00:19:23.226 killing process with pid 3796891 00:19:23.226 15:00:08 -- common/autotest_common.sh@955 -- # kill 3796891 00:19:23.226 [2024-04-26 15:00:08.931301] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:23.226 15:00:08 -- common/autotest_common.sh@960 -- # wait 3796891 00:19:23.485 15:00:09 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:23.485 15:00:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:23.485 15:00:09 -- target/tls.sh@203 -- # echo '{ 00:19:23.485 "subsystems": [ 00:19:23.485 { 00:19:23.485 "subsystem": "keyring", 00:19:23.485 "config": [] 00:19:23.485 }, 00:19:23.485 { 00:19:23.485 "subsystem": "iobuf", 00:19:23.485 "config": [ 00:19:23.485 { 00:19:23.485 "method": "iobuf_set_options", 00:19:23.485 "params": { 00:19:23.485 "small_pool_count": 8192, 00:19:23.485 "large_pool_count": 1024, 00:19:23.485 "small_bufsize": 8192, 00:19:23.485 "large_bufsize": 135168 00:19:23.485 } 00:19:23.485 } 00:19:23.485 ] 00:19:23.485 }, 00:19:23.485 { 00:19:23.485 "subsystem": "sock", 00:19:23.485 "config": [ 00:19:23.485 { 00:19:23.485 "method": "sock_impl_set_options", 00:19:23.485 "params": { 00:19:23.485 "impl_name": "posix", 00:19:23.485 "recv_buf_size": 2097152, 00:19:23.485 "send_buf_size": 2097152, 00:19:23.485 "enable_recv_pipe": true, 00:19:23.485 "enable_quickack": false, 00:19:23.485 "enable_placement_id": 0, 00:19:23.485 "enable_zerocopy_send_server": true, 00:19:23.485 "enable_zerocopy_send_client": false, 00:19:23.485 "zerocopy_threshold": 0, 00:19:23.485 "tls_version": 0, 00:19:23.485 "enable_ktls": false 00:19:23.485 } 00:19:23.485 }, 00:19:23.485 { 00:19:23.485 "method": "sock_impl_set_options", 00:19:23.485 "params": { 00:19:23.485 "impl_name": "ssl", 00:19:23.485 "recv_buf_size": 4096, 00:19:23.485 "send_buf_size": 4096, 00:19:23.485 "enable_recv_pipe": true, 00:19:23.485 "enable_quickack": false, 00:19:23.485 "enable_placement_id": 0, 00:19:23.485 "enable_zerocopy_send_server": true, 00:19:23.485 "enable_zerocopy_send_client": false, 00:19:23.485 "zerocopy_threshold": 0, 00:19:23.485 "tls_version": 0, 00:19:23.485 "enable_ktls": false 00:19:23.485 } 00:19:23.485 } 00:19:23.485 ] 00:19:23.485 }, 00:19:23.485 { 00:19:23.485 "subsystem": "vmd", 00:19:23.485 "config": [] 00:19:23.485 }, 00:19:23.485 { 00:19:23.485 "subsystem": "accel", 00:19:23.485 "config": [ 00:19:23.485 { 00:19:23.485 "method": "accel_set_options", 00:19:23.485 "params": { 00:19:23.485 "small_cache_size": 128, 00:19:23.485 "large_cache_size": 16, 00:19:23.485 "task_count": 2048, 00:19:23.485 "sequence_count": 2048, 00:19:23.485 "buf_count": 2048 00:19:23.486 } 00:19:23.486 } 00:19:23.486 ] 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "subsystem": "bdev", 00:19:23.486 "config": [ 00:19:23.486 { 00:19:23.486 "method": "bdev_set_options", 00:19:23.486 "params": { 00:19:23.486 "bdev_io_pool_size": 65535, 00:19:23.486 "bdev_io_cache_size": 256, 00:19:23.486 
"bdev_auto_examine": true, 00:19:23.486 "iobuf_small_cache_size": 128, 00:19:23.486 "iobuf_large_cache_size": 16 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "bdev_raid_set_options", 00:19:23.486 "params": { 00:19:23.486 "process_window_size_kb": 1024 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "bdev_iscsi_set_options", 00:19:23.486 "params": { 00:19:23.486 "timeout_sec": 30 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "bdev_nvme_set_options", 00:19:23.486 "params": { 00:19:23.486 "action_on_timeout": "none", 00:19:23.486 "timeout_us": 0, 00:19:23.486 "timeout_admin_us": 0, 00:19:23.486 "keep_alive_timeout_ms": 10000, 00:19:23.486 "arbitration_burst": 0, 00:19:23.486 "low_priority_weight": 0, 00:19:23.486 "medium_priority_weight": 0, 00:19:23.486 "high_priority_weight": 0, 00:19:23.486 "nvme_adminq_poll_period_us": 10000, 00:19:23.486 "nvme_ioq_poll_period_us": 0, 00:19:23.486 "io_queue_requests": 0, 00:19:23.486 "delay_cmd_submit": true, 00:19:23.486 "transport_retry_count": 4, 00:19:23.486 "bdev_retry_count": 3, 00:19:23.486 "transport_ack_timeout": 0, 00:19:23.486 "ctrlr_loss_timeout_sec": 0, 00:19:23.486 "reconnect_delay_sec": 0, 00:19:23.486 "fast_io_fail_timeout_sec": 0, 00:19:23.486 "disable_auto_failback": false, 00:19:23.486 "generate_uuids": false, 00:19:23.486 "transport_tos": 0, 00:19:23.486 "nvme_error_stat": false, 00:19:23.486 "rdma_srq_size": 0, 00:19:23.486 "io_path_stat": false, 00:19:23.486 "allow_accel_sequence": false, 00:19:23.486 "rdma_max_cq_size": 0, 00:19:23.486 "rdma_cm_event_timeout_ms": 0, 00:19:23.486 "dhchap_digests": [ 00:19:23.486 "sha256", 00:19:23.486 "sha384", 00:19:23.486 "sha512" 00:19:23.486 ], 00:19:23.486 "dhchap_dhgroups": [ 00:19:23.486 "null", 00:19:23.486 "ffdhe2048", 00:19:23.486 "ffdhe3072", 00:19:23.486 "ffdhe4096", 00:19:23.486 "ffdhe6144", 00:19:23.486 "ffdhe8192" 00:19:23.486 ] 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "bdev_nvme_set_hotplug", 00:19:23.486 "params": { 00:19:23.486 "period_us": 100000, 00:19:23.486 "enable": false 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "bdev_malloc_create", 00:19:23.486 "params": { 00:19:23.486 "name": "malloc0", 00:19:23.486 "num_blocks": 8192, 00:19:23.486 "block_size": 4096, 00:19:23.486 "physical_block_size": 4096, 00:19:23.486 "uuid": "c7e3c8df-c53f-4713-b21d-dd34ca31c110", 00:19:23.486 "optimal_io_boundary": 0 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "bdev_wait_for_examine" 00:19:23.486 } 00:19:23.486 ] 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "subsystem": "nbd", 00:19:23.486 "config": [] 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "subsystem": "scheduler", 00:19:23.486 "config": [ 00:19:23.486 { 00:19:23.486 "method": "framework_set_scheduler", 00:19:23.486 "params": { 00:19:23.486 "name": "static" 00:19:23.486 } 00:19:23.486 } 00:19:23.486 ] 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "subsystem": "nvmf", 00:19:23.486 "config": [ 00:19:23.486 { 00:19:23.486 "method": "nvmf_set_config", 00:19:23.486 "params": { 00:19:23.486 "discovery_filter": "match_any", 00:19:23.486 "admin_cmd_passthru": { 00:19:23.486 "identify_ctrlr": false 00:19:23.486 } 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "nvmf_set_max_subsystems", 00:19:23.486 "params": { 00:19:23.486 "max_subsystems": 1024 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "nvmf_set_crdt", 00:19:23.486 "params": { 00:19:23.486 "crdt1": 0, 
00:19:23.486 "crdt2": 0, 00:19:23.486 "crdt3": 0 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "nvmf_create_transport", 00:19:23.486 "params": { 00:19:23.486 "trtype": "TCP", 00:19:23.486 "max_queue_depth": 128, 00:19:23.486 "max_io_qpairs_per_ctrlr": 127, 00:19:23.486 "in_capsule_data_size": 4096, 00:19:23.486 "max_io_size": 131072, 00:19:23.486 "io_unit_size": 131072, 00:19:23.486 "max_aq_depth": 128, 00:19:23.486 "num_shared_buffers": 511, 00:19:23.486 "buf_cache_size": 4294967295, 00:19:23.486 "dif_insert_or_strip": false, 00:19:23.486 "zcopy": false, 00:19:23.486 "c2h_success": false, 00:19:23.486 "sock_priority": 0, 00:19:23.486 "abort_timeout_sec": 1, 00:19:23.486 "ack_timeout": 0, 00:19:23.486 "data_wr_pool_size": 0 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "nvmf_create_subsystem", 00:19:23.486 "params": { 00:19:23.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.486 "allow_any_host": false, 00:19:23.486 "serial_number": "SPDK00000000000001", 00:19:23.486 "model_number": "SPDK bdev Controller", 00:19:23.486 "max_namespaces": 10, 00:19:23.486 "min_cntlid": 1, 00:19:23.486 "max_cntlid": 65519, 00:19:23.486 "ana_reporting": false 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "nvmf_subsystem_add_host", 00:19:23.486 "params": { 00:19:23.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.486 "host": "nqn.2016-06.io.spdk:host1", 00:19:23.486 "psk": "/tmp/tmp.0KKFjE8252" 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "nvmf_subsystem_add_ns", 00:19:23.486 "params": { 00:19:23.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.486 "namespace": { 00:19:23.486 "nsid": 1, 00:19:23.486 "bdev_name": "malloc0", 00:19:23.486 "nguid": "C7E3C8DFC53F4713B21DDD34CA31C110", 00:19:23.486 "uuid": "c7e3c8df-c53f-4713-b21d-dd34ca31c110", 00:19:23.486 "no_auto_visible": false 00:19:23.486 } 00:19:23.486 } 00:19:23.486 }, 00:19:23.486 { 00:19:23.486 "method": "nvmf_subsystem_add_listener", 00:19:23.486 "params": { 00:19:23.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.486 "listen_address": { 00:19:23.486 "trtype": "TCP", 00:19:23.486 "adrfam": "IPv4", 00:19:23.486 "traddr": "10.0.0.2", 00:19:23.486 "trsvcid": "4420" 00:19:23.486 }, 00:19:23.486 "secure_channel": true 00:19:23.486 } 00:19:23.486 } 00:19:23.486 ] 00:19:23.486 } 00:19:23.486 ] 00:19:23.486 }' 00:19:23.486 15:00:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:23.486 15:00:09 -- common/autotest_common.sh@10 -- # set +x 00:19:23.486 15:00:09 -- nvmf/common.sh@470 -- # nvmfpid=3797904 00:19:23.486 15:00:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:23.486 15:00:09 -- nvmf/common.sh@471 -- # waitforlisten 3797904 00:19:23.486 15:00:09 -- common/autotest_common.sh@817 -- # '[' -z 3797904 ']' 00:19:23.486 15:00:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.486 15:00:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:23.486 15:00:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:23.486 15:00:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:23.486 15:00:09 -- common/autotest_common.sh@10 -- # set +x 00:19:23.486 [2024-04-26 15:00:09.220695] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:23.486 [2024-04-26 15:00:09.220786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.744 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.744 [2024-04-26 15:00:09.259525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:23.744 [2024-04-26 15:00:09.291977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.744 [2024-04-26 15:00:09.383259] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.744 [2024-04-26 15:00:09.383331] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.744 [2024-04-26 15:00:09.383355] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.744 [2024-04-26 15:00:09.383370] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.744 [2024-04-26 15:00:09.383388] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.744 [2024-04-26 15:00:09.383473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.002 [2024-04-26 15:00:09.598373] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.002 [2024-04-26 15:00:09.614316] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:24.002 [2024-04-26 15:00:09.630359] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.002 [2024-04-26 15:00:09.640200] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.569 15:00:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:24.569 15:00:10 -- common/autotest_common.sh@850 -- # return 0 00:19:24.569 15:00:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:24.569 15:00:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:24.569 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:19:24.569 15:00:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.569 15:00:10 -- target/tls.sh@207 -- # bdevperf_pid=3798059 00:19:24.569 15:00:10 -- target/tls.sh@208 -- # waitforlisten 3798059 /var/tmp/bdevperf.sock 00:19:24.569 15:00:10 -- common/autotest_common.sh@817 -- # '[' -z 3798059 ']' 00:19:24.569 15:00:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.569 15:00:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:24.569 15:00:10 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:24.569 15:00:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:24.569 15:00:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:24.569 15:00:10 -- target/tls.sh@204 -- # echo '{ 00:19:24.569 "subsystems": [ 00:19:24.569 { 00:19:24.569 "subsystem": "keyring", 00:19:24.569 "config": [] 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "subsystem": "iobuf", 00:19:24.569 "config": [ 00:19:24.569 { 00:19:24.569 "method": "iobuf_set_options", 00:19:24.569 "params": { 00:19:24.569 "small_pool_count": 8192, 00:19:24.569 "large_pool_count": 1024, 00:19:24.569 "small_bufsize": 8192, 00:19:24.569 "large_bufsize": 135168 00:19:24.569 } 00:19:24.569 } 00:19:24.569 ] 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "subsystem": "sock", 00:19:24.569 "config": [ 00:19:24.569 { 00:19:24.569 "method": "sock_impl_set_options", 00:19:24.569 "params": { 00:19:24.569 "impl_name": "posix", 00:19:24.569 "recv_buf_size": 2097152, 00:19:24.569 "send_buf_size": 2097152, 00:19:24.569 "enable_recv_pipe": true, 00:19:24.569 "enable_quickack": false, 00:19:24.569 "enable_placement_id": 0, 00:19:24.569 "enable_zerocopy_send_server": true, 00:19:24.569 "enable_zerocopy_send_client": false, 00:19:24.569 "zerocopy_threshold": 0, 00:19:24.569 "tls_version": 0, 00:19:24.569 "enable_ktls": false 00:19:24.569 } 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "method": "sock_impl_set_options", 00:19:24.569 "params": { 00:19:24.569 "impl_name": "ssl", 00:19:24.569 "recv_buf_size": 4096, 00:19:24.569 "send_buf_size": 4096, 00:19:24.569 "enable_recv_pipe": true, 00:19:24.569 "enable_quickack": false, 00:19:24.569 "enable_placement_id": 0, 00:19:24.569 "enable_zerocopy_send_server": true, 00:19:24.569 "enable_zerocopy_send_client": false, 00:19:24.569 "zerocopy_threshold": 0, 00:19:24.569 "tls_version": 0, 00:19:24.569 "enable_ktls": false 00:19:24.569 } 00:19:24.569 } 00:19:24.569 ] 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "subsystem": "vmd", 00:19:24.569 "config": [] 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "subsystem": "accel", 00:19:24.569 "config": [ 00:19:24.569 { 00:19:24.569 "method": "accel_set_options", 00:19:24.569 "params": { 00:19:24.569 "small_cache_size": 128, 00:19:24.569 "large_cache_size": 16, 00:19:24.569 "task_count": 2048, 00:19:24.569 "sequence_count": 2048, 00:19:24.569 "buf_count": 2048 00:19:24.569 } 00:19:24.569 } 00:19:24.569 ] 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "subsystem": "bdev", 00:19:24.569 "config": [ 00:19:24.569 { 00:19:24.569 "method": "bdev_set_options", 00:19:24.569 "params": { 00:19:24.569 "bdev_io_pool_size": 65535, 00:19:24.569 "bdev_io_cache_size": 256, 00:19:24.569 "bdev_auto_examine": true, 00:19:24.569 "iobuf_small_cache_size": 128, 00:19:24.569 "iobuf_large_cache_size": 16 00:19:24.569 } 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "method": "bdev_raid_set_options", 00:19:24.569 "params": { 00:19:24.569 "process_window_size_kb": 1024 00:19:24.569 } 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "method": "bdev_iscsi_set_options", 00:19:24.569 "params": { 00:19:24.569 "timeout_sec": 30 00:19:24.569 } 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "method": "bdev_nvme_set_options", 00:19:24.569 "params": { 00:19:24.569 "action_on_timeout": "none", 00:19:24.569 "timeout_us": 0, 00:19:24.569 "timeout_admin_us": 0, 00:19:24.569 "keep_alive_timeout_ms": 10000, 00:19:24.569 "arbitration_burst": 0, 00:19:24.569 "low_priority_weight": 0, 00:19:24.569 "medium_priority_weight": 0, 00:19:24.569 "high_priority_weight": 0, 00:19:24.569 "nvme_adminq_poll_period_us": 10000, 00:19:24.569 "nvme_ioq_poll_period_us": 0, 00:19:24.569 "io_queue_requests": 512, 
00:19:24.569 "delay_cmd_submit": true, 00:19:24.569 "transport_retry_count": 4, 00:19:24.569 "bdev_retry_count": 3, 00:19:24.569 "transport_ack_timeout": 0, 00:19:24.569 "ctrlr_loss_timeout_sec": 0, 00:19:24.569 "reconnect_delay_sec": 0, 00:19:24.569 "fast_io_fail_timeout_sec": 0, 00:19:24.569 "disable_auto_failback": false, 00:19:24.569 "generate_uuids": false, 00:19:24.569 "transport_tos": 0, 00:19:24.569 "nvme_error_stat": false, 00:19:24.569 "rdma_srq_size": 0, 00:19:24.569 "io_path_stat": false, 00:19:24.569 "allow_accel_sequence": false, 00:19:24.569 "rdma_max_cq_size": 0, 00:19:24.569 "rdma_cm_event_timeout_ms": 0, 00:19:24.569 "dhchap_digests": [ 00:19:24.569 "sha256", 00:19:24.569 "sha384", 00:19:24.569 "sha512" 00:19:24.569 ], 00:19:24.569 "dhchap_dhgroups": [ 00:19:24.569 "null", 00:19:24.569 "ffdhe2048", 00:19:24.569 "ffdhe3072", 00:19:24.569 "ffdhe4096", 00:19:24.569 "ffdhe6144", 00:19:24.569 "ffdhe8192" 00:19:24.569 ] 00:19:24.569 } 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "method": "bdev_nvme_attach_controller", 00:19:24.569 "params": { 00:19:24.569 "name": "TLSTEST", 00:19:24.569 "trtype": "TCP", 00:19:24.569 "adrfam": "IPv4", 00:19:24.569 "traddr": "10.0.0.2", 00:19:24.569 "trsvcid": "4420", 00:19:24.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.569 "prchk_reftag": false, 00:19:24.569 "prchk_guard": false, 00:19:24.569 "ctrlr_loss_timeout_sec": 0, 00:19:24.569 "reconnect_delay_sec": 0, 00:19:24.569 "fast_io_fail_timeout_sec": 0, 00:19:24.569 "psk": "/tmp/tmp.0KKFjE8252", 00:19:24.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.569 "hdgst": false, 00:19:24.569 "ddgst": false 00:19:24.569 } 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "method": "bdev_nvme_set_hotplug", 00:19:24.569 "params": { 00:19:24.569 "period_us": 100000, 00:19:24.569 "enable": false 00:19:24.569 } 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "method": "bdev_wait_for_examine" 00:19:24.569 } 00:19:24.569 ] 00:19:24.569 }, 00:19:24.569 { 00:19:24.569 "subsystem": "nbd", 00:19:24.569 "config": [] 00:19:24.569 } 00:19:24.569 ] 00:19:24.569 }' 00:19:24.569 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:19:24.569 [2024-04-26 15:00:10.269582] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:24.569 [2024-04-26 15:00:10.269671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798059 ] 00:19:24.569 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.569 [2024-04-26 15:00:10.301638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:24.829 [2024-04-26 15:00:10.330500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.829 [2024-04-26 15:00:10.419082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.088 [2024-04-26 15:00:10.579478] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.088 [2024-04-26 15:00:10.579624] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:25.667 15:00:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:25.668 15:00:11 -- common/autotest_common.sh@850 -- # return 0 00:19:25.668 15:00:11 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:25.668 Running I/O for 10 seconds... 00:19:35.640 00:19:35.640 Latency(us) 00:19:35.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.640 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:35.640 Verification LBA range: start 0x0 length 0x2000 00:19:35.640 TLSTESTn1 : 10.02 3653.68 14.27 0.00 0.00 34971.55 6068.15 36700.16 00:19:35.640 =================================================================================================================== 00:19:35.640 Total : 3653.68 14.27 0.00 0.00 34971.55 6068.15 36700.16 00:19:35.640 0 00:19:35.640 15:00:21 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.640 15:00:21 -- target/tls.sh@214 -- # killprocess 3798059 00:19:35.640 15:00:21 -- common/autotest_common.sh@936 -- # '[' -z 3798059 ']' 00:19:35.640 15:00:21 -- common/autotest_common.sh@940 -- # kill -0 3798059 00:19:35.640 15:00:21 -- common/autotest_common.sh@941 -- # uname 00:19:35.640 15:00:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:35.640 15:00:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3798059 00:19:35.898 15:00:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:35.898 15:00:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:35.898 15:00:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3798059' 00:19:35.898 killing process with pid 3798059 00:19:35.898 15:00:21 -- common/autotest_common.sh@955 -- # kill 3798059 00:19:35.898 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.898 00:19:35.898 Latency(us) 00:19:35.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.898 =================================================================================================================== 00:19:35.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.898 [2024-04-26 15:00:21.403044] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:35.898 15:00:21 -- common/autotest_common.sh@960 -- # wait 3798059 00:19:35.898 15:00:21 -- target/tls.sh@215 -- # killprocess 3797904 00:19:35.898 15:00:21 -- common/autotest_common.sh@936 -- # '[' -z 3797904 ']' 00:19:35.898 15:00:21 -- common/autotest_common.sh@940 -- # kill -0 3797904 00:19:35.898 15:00:21 -- common/autotest_common.sh@941 -- # uname 00:19:35.898 15:00:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:35.898 15:00:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3797904 00:19:36.157 15:00:21 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:36.157 15:00:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:36.157 15:00:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3797904' 00:19:36.157 killing process with pid 3797904 00:19:36.157 15:00:21 -- common/autotest_common.sh@955 -- # kill 3797904 00:19:36.157 [2024-04-26 15:00:21.657730] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:36.157 15:00:21 -- common/autotest_common.sh@960 -- # wait 3797904 00:19:36.417 15:00:21 -- target/tls.sh@218 -- # nvmfappstart 00:19:36.417 15:00:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:36.417 15:00:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:36.417 15:00:21 -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 15:00:21 -- nvmf/common.sh@470 -- # nvmfpid=3799434 00:19:36.417 15:00:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:36.417 15:00:21 -- nvmf/common.sh@471 -- # waitforlisten 3799434 00:19:36.417 15:00:21 -- common/autotest_common.sh@817 -- # '[' -z 3799434 ']' 00:19:36.417 15:00:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.417 15:00:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:36.417 15:00:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.417 15:00:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:36.417 15:00:21 -- common/autotest_common.sh@10 -- # set +x 00:19:36.417 [2024-04-26 15:00:21.960038] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:36.417 [2024-04-26 15:00:21.960137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.417 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.417 [2024-04-26 15:00:21.997519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:36.417 [2024-04-26 15:00:22.029813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.417 [2024-04-26 15:00:22.114505] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.417 [2024-04-26 15:00:22.114576] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.417 [2024-04-26 15:00:22.114603] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.417 [2024-04-26 15:00:22.114618] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.417 [2024-04-26 15:00:22.114630] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
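Teardown between passes follows the order traced above: the bdevperf initiator is killed and reaped first, then the nvmf_tgt, so the RPC sockets and shared memory are released before the next nvmfappstart. A rough sketch of what the killprocess helper does, using this run's target pid (kill -0 is only a liveness probe):

  kill -0 3797904 && kill 3797904
  wait 3797904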
00:19:36.417 [2024-04-26 15:00:22.114664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.685 15:00:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:36.685 15:00:22 -- common/autotest_common.sh@850 -- # return 0 00:19:36.685 15:00:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:36.685 15:00:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:36.685 15:00:22 -- common/autotest_common.sh@10 -- # set +x 00:19:36.685 15:00:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.685 15:00:22 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.0KKFjE8252 00:19:36.685 15:00:22 -- target/tls.sh@49 -- # local key=/tmp/tmp.0KKFjE8252 00:19:36.685 15:00:22 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.944 [2024-04-26 15:00:22.532828] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.944 15:00:22 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:37.201 15:00:22 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:37.459 [2024-04-26 15:00:23.066235] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.459 [2024-04-26 15:00:23.066533] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.459 15:00:23 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:37.717 malloc0 00:19:37.717 15:00:23 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:37.975 15:00:23 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KKFjE8252 00:19:38.232 [2024-04-26 15:00:23.835607] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:38.232 15:00:23 -- target/tls.sh@222 -- # bdevperf_pid=3799684 00:19:38.232 15:00:23 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:38.232 15:00:23 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.232 15:00:23 -- target/tls.sh@225 -- # waitforlisten 3799684 /var/tmp/bdevperf.sock 00:19:38.232 15:00:23 -- common/autotest_common.sh@817 -- # '[' -z 3799684 ']' 00:19:38.232 15:00:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.232 15:00:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:38.232 15:00:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:38.232 15:00:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:38.232 15:00:23 -- common/autotest_common.sh@10 -- # set +x 00:19:38.232 [2024-04-26 15:00:23.898699] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:38.232 [2024-04-26 15:00:23.898781] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799684 ] 00:19:38.232 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.232 [2024-04-26 15:00:23.930164] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:38.232 [2024-04-26 15:00:23.961412] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.493 [2024-04-26 15:00:24.050528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.493 15:00:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:38.493 15:00:24 -- common/autotest_common.sh@850 -- # return 0 00:19:38.493 15:00:24 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0KKFjE8252 00:19:38.752 15:00:24 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:39.010 [2024-04-26 15:00:24.623329] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.010 nvme0n1 00:19:39.010 15:00:24 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:39.266 Running I/O for 1 seconds... 
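This pass drops the deprecated per-controller PSK path (the spdk_nvme_ctrlr_opts.psk warnings above) in favour of the keyring API: the key file is registered once under a name, and the attach references that name instead of a path. Both commands are verbatim from the trace above, with rpc.py again abbreviating the full scripts/rpc.py path:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0KKFjE8252
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

With the keyring form, --psk takes the registered key name (key0) rather than a filesystem path, replacing the PSK-path plumbing scheduled for removal in v24.09.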
00:19:40.199 00:19:40.199 Latency(us) 00:19:40.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.199 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:40.199 Verification LBA range: start 0x0 length 0x2000 00:19:40.199 nvme0n1 : 1.03 3203.88 12.52 0.00 0.00 39475.65 7475.96 30292.20 00:19:40.199 =================================================================================================================== 00:19:40.199 Total : 3203.88 12.52 0.00 0.00 39475.65 7475.96 30292.20 00:19:40.199 0 00:19:40.199 15:00:25 -- target/tls.sh@234 -- # killprocess 3799684 00:19:40.199 15:00:25 -- common/autotest_common.sh@936 -- # '[' -z 3799684 ']' 00:19:40.199 15:00:25 -- common/autotest_common.sh@940 -- # kill -0 3799684 00:19:40.199 15:00:25 -- common/autotest_common.sh@941 -- # uname 00:19:40.199 15:00:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.199 15:00:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3799684 00:19:40.199 15:00:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:40.199 15:00:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:40.199 15:00:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3799684' 00:19:40.199 killing process with pid 3799684 00:19:40.199 15:00:25 -- common/autotest_common.sh@955 -- # kill 3799684 00:19:40.199 Received shutdown signal, test time was about 1.000000 seconds 00:19:40.199 00:19:40.199 Latency(us) 00:19:40.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.199 =================================================================================================================== 00:19:40.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.199 15:00:25 -- common/autotest_common.sh@960 -- # wait 3799684 00:19:40.458 15:00:26 -- target/tls.sh@235 -- # killprocess 3799434 00:19:40.458 15:00:26 -- common/autotest_common.sh@936 -- # '[' -z 3799434 ']' 00:19:40.458 15:00:26 -- common/autotest_common.sh@940 -- # kill -0 3799434 00:19:40.458 15:00:26 -- common/autotest_common.sh@941 -- # uname 00:19:40.458 15:00:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.458 15:00:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3799434 00:19:40.458 15:00:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:40.458 15:00:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:40.458 15:00:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3799434' 00:19:40.458 killing process with pid 3799434 00:19:40.458 15:00:26 -- common/autotest_common.sh@955 -- # kill 3799434 00:19:40.458 [2024-04-26 15:00:26.129737] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:40.458 15:00:26 -- common/autotest_common.sh@960 -- # wait 3799434 00:19:40.716 15:00:26 -- target/tls.sh@238 -- # nvmfappstart 00:19:40.716 15:00:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:40.716 15:00:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:40.716 15:00:26 -- common/autotest_common.sh@10 -- # set +x 00:19:40.716 15:00:26 -- nvmf/common.sh@470 -- # nvmfpid=3799996 00:19:40.716 15:00:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:40.716 15:00:26 -- nvmf/common.sh@471 -- # waitforlisten 3799996 
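A quick consistency check on the result tables: at a 4096-byte I/O size, MiB/s is IOPS x 4096 / 2^20, i.e. IOPS / 256. The nvme0n1 row above gives 3203.88 / 256 ≈ 12.52 MiB/s, and the earlier TLSTESTn1 run 3653.68 / 256 ≈ 14.27 MiB/s, both matching the reported MiB/s column.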
00:19:40.716 15:00:26 -- common/autotest_common.sh@817 -- # '[' -z 3799996 ']' 00:19:40.716 15:00:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.716 15:00:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:40.716 15:00:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.716 15:00:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:40.716 15:00:26 -- common/autotest_common.sh@10 -- # set +x 00:19:40.716 [2024-04-26 15:00:26.400880] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:40.716 [2024-04-26 15:00:26.400950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.716 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.716 [2024-04-26 15:00:26.437351] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:40.975 [2024-04-26 15:00:26.468891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.975 [2024-04-26 15:00:26.557869] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.975 [2024-04-26 15:00:26.557934] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.975 [2024-04-26 15:00:26.557959] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.975 [2024-04-26 15:00:26.557972] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.975 [2024-04-26 15:00:26.557984] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
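The 'Waiting for process to start up...' message comes from the waitforlisten helper, which polls the target's RPC socket (up to max_retries=100, as the trace shows) until it answers. A simplified sketch of that pattern; the real helper in autotest_common.sh carries more bookkeeping:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1        # target died while starting
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                          # never came up
    }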
00:19:40.975 [2024-04-26 15:00:26.558016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.975 15:00:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:40.975 15:00:26 -- common/autotest_common.sh@850 -- # return 0 00:19:40.975 15:00:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:40.975 15:00:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:40.975 15:00:26 -- common/autotest_common.sh@10 -- # set +x 00:19:40.975 15:00:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.975 15:00:26 -- target/tls.sh@239 -- # rpc_cmd 00:19:40.975 15:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.975 15:00:26 -- common/autotest_common.sh@10 -- # set +x 00:19:40.975 [2024-04-26 15:00:26.691053] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.975 malloc0 00:19:41.234 [2024-04-26 15:00:26.722523] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.234 [2024-04-26 15:00:26.722798] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.234 15:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.234 15:00:26 -- target/tls.sh@252 -- # bdevperf_pid=3800024 00:19:41.234 15:00:26 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:41.234 15:00:26 -- target/tls.sh@254 -- # waitforlisten 3800024 /var/tmp/bdevperf.sock 00:19:41.234 15:00:26 -- common/autotest_common.sh@817 -- # '[' -z 3800024 ']' 00:19:41.234 15:00:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.234 15:00:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:41.234 15:00:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.234 15:00:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:41.234 15:00:26 -- common/autotest_common.sh@10 -- # set +x 00:19:41.234 [2024-04-26 15:00:26.792860] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:41.234 [2024-04-26 15:00:26.792921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3800024 ] 00:19:41.234 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.234 [2024-04-26 15:00:26.825251] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
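bdevperf is launched idle here: -z holds it until perform_tests arrives over its own RPC socket (-r), -m 2 pins it to core 1, and the workload is queue depth 128, 4 KiB I/Os, 'verify' pattern, for 1 second. The launch pattern, flags as printed in the log:

    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # same polling helper as above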
00:19:41.234 [2024-04-26 15:00:26.855090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.234 [2024-04-26 15:00:26.943689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.492 15:00:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:41.492 15:00:27 -- common/autotest_common.sh@850 -- # return 0 00:19:41.492 15:00:27 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0KKFjE8252 00:19:41.749 15:00:27 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:42.007 [2024-04-26 15:00:27.530858] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.007 nvme0n1 00:19:42.007 15:00:27 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:42.007 Running I/O for 1 seconds... 00:19:43.380 00:19:43.380 Latency(us) 00:19:43.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.380 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:43.380 Verification LBA range: start 0x0 length 0x2000 00:19:43.380 nvme0n1 : 1.03 2963.14 11.57 0.00 0.00 42661.17 9757.58 55535.69 00:19:43.380 =================================================================================================================== 00:19:43.380 Total : 2963.14 11.57 0.00 0.00 42661.17 9757.58 55535.69 00:19:43.380 0 00:19:43.380 15:00:28 -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:43.380 15:00:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.380 15:00:28 -- common/autotest_common.sh@10 -- # set +x 00:19:43.380 15:00:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.380 15:00:28 -- target/tls.sh@263 -- # tgtcfg='{ 00:19:43.380 "subsystems": [ 00:19:43.380 { 00:19:43.380 "subsystem": "keyring", 00:19:43.380 "config": [ 00:19:43.380 { 00:19:43.380 "method": "keyring_file_add_key", 00:19:43.380 "params": { 00:19:43.380 "name": "key0", 00:19:43.380 "path": "/tmp/tmp.0KKFjE8252" 00:19:43.380 } 00:19:43.380 } 00:19:43.380 ] 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "subsystem": "iobuf", 00:19:43.380 "config": [ 00:19:43.380 { 00:19:43.380 "method": "iobuf_set_options", 00:19:43.380 "params": { 00:19:43.380 "small_pool_count": 8192, 00:19:43.380 "large_pool_count": 1024, 00:19:43.380 "small_bufsize": 8192, 00:19:43.380 "large_bufsize": 135168 00:19:43.380 } 00:19:43.380 } 00:19:43.380 ] 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "subsystem": "sock", 00:19:43.380 "config": [ 00:19:43.380 { 00:19:43.380 "method": "sock_impl_set_options", 00:19:43.380 "params": { 00:19:43.380 "impl_name": "posix", 00:19:43.380 "recv_buf_size": 2097152, 00:19:43.380 "send_buf_size": 2097152, 00:19:43.380 "enable_recv_pipe": true, 00:19:43.380 "enable_quickack": false, 00:19:43.380 "enable_placement_id": 0, 00:19:43.380 "enable_zerocopy_send_server": true, 00:19:43.380 "enable_zerocopy_send_client": false, 00:19:43.380 "zerocopy_threshold": 0, 00:19:43.380 "tls_version": 0, 00:19:43.380 "enable_ktls": false 00:19:43.380 } 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "method": "sock_impl_set_options", 00:19:43.380 "params": { 00:19:43.380 "impl_name": "ssl", 
00:19:43.380 "recv_buf_size": 4096, 00:19:43.380 "send_buf_size": 4096, 00:19:43.380 "enable_recv_pipe": true, 00:19:43.380 "enable_quickack": false, 00:19:43.380 "enable_placement_id": 0, 00:19:43.380 "enable_zerocopy_send_server": true, 00:19:43.380 "enable_zerocopy_send_client": false, 00:19:43.380 "zerocopy_threshold": 0, 00:19:43.380 "tls_version": 0, 00:19:43.380 "enable_ktls": false 00:19:43.380 } 00:19:43.380 } 00:19:43.380 ] 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "subsystem": "vmd", 00:19:43.380 "config": [] 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "subsystem": "accel", 00:19:43.380 "config": [ 00:19:43.380 { 00:19:43.380 "method": "accel_set_options", 00:19:43.380 "params": { 00:19:43.380 "small_cache_size": 128, 00:19:43.380 "large_cache_size": 16, 00:19:43.380 "task_count": 2048, 00:19:43.380 "sequence_count": 2048, 00:19:43.380 "buf_count": 2048 00:19:43.380 } 00:19:43.380 } 00:19:43.380 ] 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "subsystem": "bdev", 00:19:43.380 "config": [ 00:19:43.380 { 00:19:43.380 "method": "bdev_set_options", 00:19:43.380 "params": { 00:19:43.380 "bdev_io_pool_size": 65535, 00:19:43.380 "bdev_io_cache_size": 256, 00:19:43.380 "bdev_auto_examine": true, 00:19:43.380 "iobuf_small_cache_size": 128, 00:19:43.380 "iobuf_large_cache_size": 16 00:19:43.380 } 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "method": "bdev_raid_set_options", 00:19:43.380 "params": { 00:19:43.380 "process_window_size_kb": 1024 00:19:43.380 } 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "method": "bdev_iscsi_set_options", 00:19:43.380 "params": { 00:19:43.380 "timeout_sec": 30 00:19:43.380 } 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "method": "bdev_nvme_set_options", 00:19:43.380 "params": { 00:19:43.380 "action_on_timeout": "none", 00:19:43.380 "timeout_us": 0, 00:19:43.380 "timeout_admin_us": 0, 00:19:43.380 "keep_alive_timeout_ms": 10000, 00:19:43.380 "arbitration_burst": 0, 00:19:43.380 "low_priority_weight": 0, 00:19:43.380 "medium_priority_weight": 0, 00:19:43.380 "high_priority_weight": 0, 00:19:43.380 "nvme_adminq_poll_period_us": 10000, 00:19:43.380 "nvme_ioq_poll_period_us": 0, 00:19:43.380 "io_queue_requests": 0, 00:19:43.380 "delay_cmd_submit": true, 00:19:43.380 "transport_retry_count": 4, 00:19:43.380 "bdev_retry_count": 3, 00:19:43.380 "transport_ack_timeout": 0, 00:19:43.380 "ctrlr_loss_timeout_sec": 0, 00:19:43.380 "reconnect_delay_sec": 0, 00:19:43.380 "fast_io_fail_timeout_sec": 0, 00:19:43.380 "disable_auto_failback": false, 00:19:43.380 "generate_uuids": false, 00:19:43.380 "transport_tos": 0, 00:19:43.380 "nvme_error_stat": false, 00:19:43.380 "rdma_srq_size": 0, 00:19:43.380 "io_path_stat": false, 00:19:43.380 "allow_accel_sequence": false, 00:19:43.380 "rdma_max_cq_size": 0, 00:19:43.380 "rdma_cm_event_timeout_ms": 0, 00:19:43.380 "dhchap_digests": [ 00:19:43.380 "sha256", 00:19:43.380 "sha384", 00:19:43.380 "sha512" 00:19:43.380 ], 00:19:43.380 "dhchap_dhgroups": [ 00:19:43.380 "null", 00:19:43.380 "ffdhe2048", 00:19:43.380 "ffdhe3072", 00:19:43.380 "ffdhe4096", 00:19:43.380 "ffdhe6144", 00:19:43.380 "ffdhe8192" 00:19:43.380 ] 00:19:43.380 } 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "method": "bdev_nvme_set_hotplug", 00:19:43.380 "params": { 00:19:43.380 "period_us": 100000, 00:19:43.380 "enable": false 00:19:43.380 } 00:19:43.380 }, 00:19:43.380 { 00:19:43.380 "method": "bdev_malloc_create", 00:19:43.380 "params": { 00:19:43.380 "name": "malloc0", 00:19:43.380 "num_blocks": 8192, 00:19:43.380 "block_size": 4096, 00:19:43.380 
"physical_block_size": 4096, 00:19:43.381 "uuid": "b3355182-7c4c-4eac-ad14-77e2be85f501", 00:19:43.381 "optimal_io_boundary": 0 00:19:43.381 } 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "method": "bdev_wait_for_examine" 00:19:43.381 } 00:19:43.381 ] 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "subsystem": "nbd", 00:19:43.381 "config": [] 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "subsystem": "scheduler", 00:19:43.381 "config": [ 00:19:43.381 { 00:19:43.381 "method": "framework_set_scheduler", 00:19:43.381 "params": { 00:19:43.381 "name": "static" 00:19:43.381 } 00:19:43.381 } 00:19:43.381 ] 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "subsystem": "nvmf", 00:19:43.381 "config": [ 00:19:43.381 { 00:19:43.381 "method": "nvmf_set_config", 00:19:43.381 "params": { 00:19:43.381 "discovery_filter": "match_any", 00:19:43.381 "admin_cmd_passthru": { 00:19:43.381 "identify_ctrlr": false 00:19:43.381 } 00:19:43.381 } 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "method": "nvmf_set_max_subsystems", 00:19:43.381 "params": { 00:19:43.381 "max_subsystems": 1024 00:19:43.381 } 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "method": "nvmf_set_crdt", 00:19:43.381 "params": { 00:19:43.381 "crdt1": 0, 00:19:43.381 "crdt2": 0, 00:19:43.381 "crdt3": 0 00:19:43.381 } 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "method": "nvmf_create_transport", 00:19:43.381 "params": { 00:19:43.381 "trtype": "TCP", 00:19:43.381 "max_queue_depth": 128, 00:19:43.381 "max_io_qpairs_per_ctrlr": 127, 00:19:43.381 "in_capsule_data_size": 4096, 00:19:43.381 "max_io_size": 131072, 00:19:43.381 "io_unit_size": 131072, 00:19:43.381 "max_aq_depth": 128, 00:19:43.381 "num_shared_buffers": 511, 00:19:43.381 "buf_cache_size": 4294967295, 00:19:43.381 "dif_insert_or_strip": false, 00:19:43.381 "zcopy": false, 00:19:43.381 "c2h_success": false, 00:19:43.381 "sock_priority": 0, 00:19:43.381 "abort_timeout_sec": 1, 00:19:43.381 "ack_timeout": 0, 00:19:43.381 "data_wr_pool_size": 0 00:19:43.381 } 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "method": "nvmf_create_subsystem", 00:19:43.381 "params": { 00:19:43.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.381 "allow_any_host": false, 00:19:43.381 "serial_number": "00000000000000000000", 00:19:43.381 "model_number": "SPDK bdev Controller", 00:19:43.381 "max_namespaces": 32, 00:19:43.381 "min_cntlid": 1, 00:19:43.381 "max_cntlid": 65519, 00:19:43.381 "ana_reporting": false 00:19:43.381 } 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "method": "nvmf_subsystem_add_host", 00:19:43.381 "params": { 00:19:43.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.381 "host": "nqn.2016-06.io.spdk:host1", 00:19:43.381 "psk": "key0" 00:19:43.381 } 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "method": "nvmf_subsystem_add_ns", 00:19:43.381 "params": { 00:19:43.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.381 "namespace": { 00:19:43.381 "nsid": 1, 00:19:43.381 "bdev_name": "malloc0", 00:19:43.381 "nguid": "B33551827C4C4EACAD1477E2BE85F501", 00:19:43.381 "uuid": "b3355182-7c4c-4eac-ad14-77e2be85f501", 00:19:43.381 "no_auto_visible": false 00:19:43.381 } 00:19:43.381 } 00:19:43.381 }, 00:19:43.381 { 00:19:43.381 "method": "nvmf_subsystem_add_listener", 00:19:43.381 "params": { 00:19:43.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.381 "listen_address": { 00:19:43.381 "trtype": "TCP", 00:19:43.381 "adrfam": "IPv4", 00:19:43.381 "traddr": "10.0.0.2", 00:19:43.381 "trsvcid": "4420" 00:19:43.381 }, 00:19:43.381 "secure_channel": true 00:19:43.381 } 00:19:43.381 } 00:19:43.381 ] 00:19:43.381 } 00:19:43.381 ] 
00:19:43.381 }' 00:19:43.381 15:00:28 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:43.637 15:00:29 -- target/tls.sh@264 -- # bperfcfg='{ 00:19:43.637 "subsystems": [ 00:19:43.637 { 00:19:43.637 "subsystem": "keyring", 00:19:43.637 "config": [ 00:19:43.637 { 00:19:43.637 "method": "keyring_file_add_key", 00:19:43.637 "params": { 00:19:43.637 "name": "key0", 00:19:43.637 "path": "/tmp/tmp.0KKFjE8252" 00:19:43.637 } 00:19:43.637 } 00:19:43.637 ] 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "subsystem": "iobuf", 00:19:43.637 "config": [ 00:19:43.637 { 00:19:43.638 "method": "iobuf_set_options", 00:19:43.638 "params": { 00:19:43.638 "small_pool_count": 8192, 00:19:43.638 "large_pool_count": 1024, 00:19:43.638 "small_bufsize": 8192, 00:19:43.638 "large_bufsize": 135168 00:19:43.638 } 00:19:43.638 } 00:19:43.638 ] 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "subsystem": "sock", 00:19:43.638 "config": [ 00:19:43.638 { 00:19:43.638 "method": "sock_impl_set_options", 00:19:43.638 "params": { 00:19:43.638 "impl_name": "posix", 00:19:43.638 "recv_buf_size": 2097152, 00:19:43.638 "send_buf_size": 2097152, 00:19:43.638 "enable_recv_pipe": true, 00:19:43.638 "enable_quickack": false, 00:19:43.638 "enable_placement_id": 0, 00:19:43.638 "enable_zerocopy_send_server": true, 00:19:43.638 "enable_zerocopy_send_client": false, 00:19:43.638 "zerocopy_threshold": 0, 00:19:43.638 "tls_version": 0, 00:19:43.638 "enable_ktls": false 00:19:43.638 } 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "method": "sock_impl_set_options", 00:19:43.638 "params": { 00:19:43.638 "impl_name": "ssl", 00:19:43.638 "recv_buf_size": 4096, 00:19:43.638 "send_buf_size": 4096, 00:19:43.638 "enable_recv_pipe": true, 00:19:43.638 "enable_quickack": false, 00:19:43.638 "enable_placement_id": 0, 00:19:43.638 "enable_zerocopy_send_server": true, 00:19:43.638 "enable_zerocopy_send_client": false, 00:19:43.638 "zerocopy_threshold": 0, 00:19:43.638 "tls_version": 0, 00:19:43.638 "enable_ktls": false 00:19:43.638 } 00:19:43.638 } 00:19:43.638 ] 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "subsystem": "vmd", 00:19:43.638 "config": [] 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "subsystem": "accel", 00:19:43.638 "config": [ 00:19:43.638 { 00:19:43.638 "method": "accel_set_options", 00:19:43.638 "params": { 00:19:43.638 "small_cache_size": 128, 00:19:43.638 "large_cache_size": 16, 00:19:43.638 "task_count": 2048, 00:19:43.638 "sequence_count": 2048, 00:19:43.638 "buf_count": 2048 00:19:43.638 } 00:19:43.638 } 00:19:43.638 ] 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "subsystem": "bdev", 00:19:43.638 "config": [ 00:19:43.638 { 00:19:43.638 "method": "bdev_set_options", 00:19:43.638 "params": { 00:19:43.638 "bdev_io_pool_size": 65535, 00:19:43.638 "bdev_io_cache_size": 256, 00:19:43.638 "bdev_auto_examine": true, 00:19:43.638 "iobuf_small_cache_size": 128, 00:19:43.638 "iobuf_large_cache_size": 16 00:19:43.638 } 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "method": "bdev_raid_set_options", 00:19:43.638 "params": { 00:19:43.638 "process_window_size_kb": 1024 00:19:43.638 } 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "method": "bdev_iscsi_set_options", 00:19:43.638 "params": { 00:19:43.638 "timeout_sec": 30 00:19:43.638 } 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "method": "bdev_nvme_set_options", 00:19:43.638 "params": { 00:19:43.638 "action_on_timeout": "none", 00:19:43.638 "timeout_us": 0, 00:19:43.638 "timeout_admin_us": 0, 00:19:43.638 "keep_alive_timeout_ms": 
10000, 00:19:43.638 "arbitration_burst": 0, 00:19:43.638 "low_priority_weight": 0, 00:19:43.638 "medium_priority_weight": 0, 00:19:43.638 "high_priority_weight": 0, 00:19:43.638 "nvme_adminq_poll_period_us": 10000, 00:19:43.638 "nvme_ioq_poll_period_us": 0, 00:19:43.638 "io_queue_requests": 512, 00:19:43.638 "delay_cmd_submit": true, 00:19:43.638 "transport_retry_count": 4, 00:19:43.638 "bdev_retry_count": 3, 00:19:43.638 "transport_ack_timeout": 0, 00:19:43.638 "ctrlr_loss_timeout_sec": 0, 00:19:43.638 "reconnect_delay_sec": 0, 00:19:43.638 "fast_io_fail_timeout_sec": 0, 00:19:43.638 "disable_auto_failback": false, 00:19:43.638 "generate_uuids": false, 00:19:43.638 "transport_tos": 0, 00:19:43.638 "nvme_error_stat": false, 00:19:43.638 "rdma_srq_size": 0, 00:19:43.638 "io_path_stat": false, 00:19:43.638 "allow_accel_sequence": false, 00:19:43.638 "rdma_max_cq_size": 0, 00:19:43.638 "rdma_cm_event_timeout_ms": 0, 00:19:43.638 "dhchap_digests": [ 00:19:43.638 "sha256", 00:19:43.638 "sha384", 00:19:43.638 "sha512" 00:19:43.638 ], 00:19:43.638 "dhchap_dhgroups": [ 00:19:43.638 "null", 00:19:43.638 "ffdhe2048", 00:19:43.638 "ffdhe3072", 00:19:43.638 "ffdhe4096", 00:19:43.638 "ffdhe6144", 00:19:43.638 "ffdhe8192" 00:19:43.638 ] 00:19:43.638 } 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "method": "bdev_nvme_attach_controller", 00:19:43.638 "params": { 00:19:43.638 "name": "nvme0", 00:19:43.638 "trtype": "TCP", 00:19:43.638 "adrfam": "IPv4", 00:19:43.638 "traddr": "10.0.0.2", 00:19:43.638 "trsvcid": "4420", 00:19:43.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.638 "prchk_reftag": false, 00:19:43.638 "prchk_guard": false, 00:19:43.638 "ctrlr_loss_timeout_sec": 0, 00:19:43.638 "reconnect_delay_sec": 0, 00:19:43.638 "fast_io_fail_timeout_sec": 0, 00:19:43.638 "psk": "key0", 00:19:43.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.638 "hdgst": false, 00:19:43.638 "ddgst": false 00:19:43.638 } 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "method": "bdev_nvme_set_hotplug", 00:19:43.638 "params": { 00:19:43.638 "period_us": 100000, 00:19:43.638 "enable": false 00:19:43.638 } 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "method": "bdev_enable_histogram", 00:19:43.638 "params": { 00:19:43.638 "name": "nvme0n1", 00:19:43.638 "enable": true 00:19:43.638 } 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "method": "bdev_wait_for_examine" 00:19:43.638 } 00:19:43.638 ] 00:19:43.638 }, 00:19:43.638 { 00:19:43.638 "subsystem": "nbd", 00:19:43.638 "config": [] 00:19:43.638 } 00:19:43.638 ] 00:19:43.638 }' 00:19:43.638 15:00:29 -- target/tls.sh@266 -- # killprocess 3800024 00:19:43.638 15:00:29 -- common/autotest_common.sh@936 -- # '[' -z 3800024 ']' 00:19:43.638 15:00:29 -- common/autotest_common.sh@940 -- # kill -0 3800024 00:19:43.638 15:00:29 -- common/autotest_common.sh@941 -- # uname 00:19:43.638 15:00:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:43.638 15:00:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3800024 00:19:43.638 15:00:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:43.638 15:00:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:43.638 15:00:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3800024' 00:19:43.638 killing process with pid 3800024 00:19:43.638 15:00:29 -- common/autotest_common.sh@955 -- # kill 3800024 00:19:43.638 Received shutdown signal, test time was about 1.000000 seconds 00:19:43.638 00:19:43.638 Latency(us) 00:19:43.638 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:19:43.638 =================================================================================================================== 00:19:43.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.638 15:00:29 -- common/autotest_common.sh@960 -- # wait 3800024 00:19:43.895 15:00:29 -- target/tls.sh@267 -- # killprocess 3799996 00:19:43.895 15:00:29 -- common/autotest_common.sh@936 -- # '[' -z 3799996 ']' 00:19:43.895 15:00:29 -- common/autotest_common.sh@940 -- # kill -0 3799996 00:19:43.895 15:00:29 -- common/autotest_common.sh@941 -- # uname 00:19:43.895 15:00:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:43.895 15:00:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3799996 00:19:43.895 15:00:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:43.895 15:00:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:43.895 15:00:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3799996' 00:19:43.895 killing process with pid 3799996 00:19:43.895 15:00:29 -- common/autotest_common.sh@955 -- # kill 3799996 00:19:43.895 15:00:29 -- common/autotest_common.sh@960 -- # wait 3799996 00:19:44.154 15:00:29 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:44.154 15:00:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:44.154 15:00:29 -- target/tls.sh@269 -- # echo '{ 00:19:44.154 "subsystems": [ 00:19:44.154 { 00:19:44.154 "subsystem": "keyring", 00:19:44.154 "config": [ 00:19:44.154 { 00:19:44.154 "method": "keyring_file_add_key", 00:19:44.154 "params": { 00:19:44.154 "name": "key0", 00:19:44.154 "path": "/tmp/tmp.0KKFjE8252" 00:19:44.154 } 00:19:44.154 } 00:19:44.154 ] 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "subsystem": "iobuf", 00:19:44.154 "config": [ 00:19:44.154 { 00:19:44.154 "method": "iobuf_set_options", 00:19:44.154 "params": { 00:19:44.154 "small_pool_count": 8192, 00:19:44.154 "large_pool_count": 1024, 00:19:44.154 "small_bufsize": 8192, 00:19:44.154 "large_bufsize": 135168 00:19:44.154 } 00:19:44.154 } 00:19:44.154 ] 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "subsystem": "sock", 00:19:44.154 "config": [ 00:19:44.154 { 00:19:44.154 "method": "sock_impl_set_options", 00:19:44.154 "params": { 00:19:44.154 "impl_name": "posix", 00:19:44.154 "recv_buf_size": 2097152, 00:19:44.154 "send_buf_size": 2097152, 00:19:44.154 "enable_recv_pipe": true, 00:19:44.154 "enable_quickack": false, 00:19:44.154 "enable_placement_id": 0, 00:19:44.154 "enable_zerocopy_send_server": true, 00:19:44.154 "enable_zerocopy_send_client": false, 00:19:44.154 "zerocopy_threshold": 0, 00:19:44.154 "tls_version": 0, 00:19:44.154 "enable_ktls": false 00:19:44.154 } 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "method": "sock_impl_set_options", 00:19:44.154 "params": { 00:19:44.154 "impl_name": "ssl", 00:19:44.154 "recv_buf_size": 4096, 00:19:44.154 "send_buf_size": 4096, 00:19:44.154 "enable_recv_pipe": true, 00:19:44.154 "enable_quickack": false, 00:19:44.154 "enable_placement_id": 0, 00:19:44.154 "enable_zerocopy_send_server": true, 00:19:44.154 "enable_zerocopy_send_client": false, 00:19:44.154 "zerocopy_threshold": 0, 00:19:44.154 "tls_version": 0, 00:19:44.154 "enable_ktls": false 00:19:44.154 } 00:19:44.154 } 00:19:44.154 ] 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "subsystem": "vmd", 00:19:44.154 "config": [] 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "subsystem": "accel", 00:19:44.154 "config": [ 00:19:44.154 { 00:19:44.154 "method": "accel_set_options", 00:19:44.154 
"params": { 00:19:44.154 "small_cache_size": 128, 00:19:44.154 "large_cache_size": 16, 00:19:44.154 "task_count": 2048, 00:19:44.154 "sequence_count": 2048, 00:19:44.154 "buf_count": 2048 00:19:44.154 } 00:19:44.154 } 00:19:44.154 ] 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "subsystem": "bdev", 00:19:44.154 "config": [ 00:19:44.154 { 00:19:44.154 "method": "bdev_set_options", 00:19:44.154 "params": { 00:19:44.154 "bdev_io_pool_size": 65535, 00:19:44.154 "bdev_io_cache_size": 256, 00:19:44.154 "bdev_auto_examine": true, 00:19:44.154 "iobuf_small_cache_size": 128, 00:19:44.154 "iobuf_large_cache_size": 16 00:19:44.154 } 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "method": "bdev_raid_set_options", 00:19:44.154 "params": { 00:19:44.154 "process_window_size_kb": 1024 00:19:44.154 } 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "method": "bdev_iscsi_set_options", 00:19:44.154 "params": { 00:19:44.154 "timeout_sec": 30 00:19:44.154 } 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "method": "bdev_nvme_set_options", 00:19:44.154 "params": { 00:19:44.154 "action_on_timeout": "none", 00:19:44.154 "timeout_us": 0, 00:19:44.154 "timeout_admin_us": 0, 00:19:44.154 "keep_alive_timeout_ms": 10000, 00:19:44.154 "arbitration_burst": 0, 00:19:44.154 "low_priority_weight": 0, 00:19:44.154 "medium_priority_weight": 0, 00:19:44.154 "high_priority_weight": 0, 00:19:44.154 "nvme_adminq_poll_period_us": 10000, 00:19:44.154 "nvme_ioq_poll_period_us": 0, 00:19:44.154 "io_queue_requests": 0, 00:19:44.154 "delay_cmd_submit": true, 00:19:44.154 "transport_retry_count": 4, 00:19:44.154 "bdev_retry_count": 3, 00:19:44.154 "transport_ack_timeout": 0, 00:19:44.154 "ctrlr_loss_timeout_sec": 0, 00:19:44.154 "reconnect_delay_sec": 0, 00:19:44.154 "fast_io_fail_timeout_sec": 0, 00:19:44.154 "disable_auto_failback": false, 00:19:44.154 "generate_uuids": false, 00:19:44.154 "transport_tos": 0, 00:19:44.154 "nvme_error_stat": false, 00:19:44.154 "rdma_srq_size": 0, 00:19:44.154 "io_path_stat": false, 00:19:44.154 "allow_accel_sequence": false, 00:19:44.154 "rdma_max_cq_size": 0, 00:19:44.154 "rdma_cm_event_timeout_ms": 0, 00:19:44.154 "dhchap_digests": [ 00:19:44.154 "sha256", 00:19:44.154 "sha384", 00:19:44.154 "sha512" 00:19:44.154 ], 00:19:44.154 "dhchap_dhgroups": [ 00:19:44.154 "null", 00:19:44.154 "ffdhe2048", 00:19:44.154 "ffdhe3072", 00:19:44.154 "ffdhe4096", 00:19:44.154 "ffdhe6144", 00:19:44.154 "ffdhe8192" 00:19:44.154 ] 00:19:44.154 } 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "method": "bdev_nvme_set_hotplug", 00:19:44.154 "params": { 00:19:44.154 "period_us": 100000, 00:19:44.154 "enable": false 00:19:44.154 } 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "method": "bdev_malloc_create", 00:19:44.154 "params": { 00:19:44.154 "name": "malloc0", 00:19:44.154 "num_blocks": 8192, 00:19:44.154 "block_size": 4096, 00:19:44.154 "physical_block_size": 4096, 00:19:44.154 "uuid": "b3355182-7c4c-4eac-ad14-77e2be85f501", 00:19:44.154 "optimal_io_boundary": 0 00:19:44.154 } 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "method": "bdev_wait_for_examine" 00:19:44.154 } 00:19:44.154 ] 00:19:44.154 }, 00:19:44.154 { 00:19:44.154 "subsystem": "nbd", 00:19:44.154 "config": [] 00:19:44.155 }, 00:19:44.155 { 00:19:44.155 "subsystem": "scheduler", 00:19:44.155 "config": [ 00:19:44.155 { 00:19:44.155 "method": "framework_set_scheduler", 00:19:44.155 "params": { 00:19:44.155 "name": "static" 00:19:44.155 } 00:19:44.155 } 00:19:44.155 ] 00:19:44.155 }, 00:19:44.155 { 00:19:44.155 "subsystem": "nvmf", 00:19:44.155 "config": [ 00:19:44.155 { 
00:19:44.155 "method": "nvmf_set_config", 00:19:44.155 "params": { 00:19:44.155 "discovery_filter": "match_any", 00:19:44.155 "admin_cmd_passthru": { 00:19:44.155 "identify_ctrlr": false 00:19:44.155 } 00:19:44.155 } 00:19:44.155 }, 00:19:44.155 { 00:19:44.155 "method": "nvmf_set_max_subsystems", 00:19:44.155 "params": { 00:19:44.155 "max_subsystems": 1024 00:19:44.155 } 00:19:44.155 }, 00:19:44.155 { 00:19:44.155 "method": "nvmf_set_crdt", 00:19:44.155 "params": { 00:19:44.155 "crdt1": 0, 00:19:44.155 "crdt2": 0, 00:19:44.155 "crdt3": 0 00:19:44.155 } 00:19:44.155 }, 00:19:44.155 { 00:19:44.155 "method": "nvmf_create_transport", 00:19:44.155 "params": { 00:19:44.155 "trtype": "TCP", 00:19:44.155 "max_queue_depth": 128, 00:19:44.155 "max_io_qpairs_per_ctrlr": 127, 00:19:44.155 "in_capsule_data_size": 4096, 00:19:44.155 "max_io_size": 131072, 00:19:44.155 "io_unit_size": 131072, 00:19:44.155 "max_aq_depth": 128, 00:19:44.155 "num_shared_buffers": 511, 00:19:44.155 "buf_cache_size": 4294967295, 00:19:44.155 "dif_insert_or_strip": false, 00:19:44.155 "zcopy": false, 00:19:44.155 "c2h_success": false, 00:19:44.155 "sock_priority": 0, 00:19:44.155 "abort_timeout_sec": 1, 00:19:44.155 "ack_timeout": 0, 00:19:44.155 "data_wr_pool_size": 0 00:19:44.155 } 00:19:44.155 }, 00:19:44.155 { 00:19:44.155 "method": "nvmf_create_subsystem", 00:19:44.155 "params": { 00:19:44.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.155 "allow_any_host": false, 00:19:44.155 "serial_number": "00000000000000000000", 00:19:44.155 "model_number": "SPDK bdev Controller", 00:19:44.155 "max_namespaces": 32, 00:19:44.155 "min_cntlid": 1, 00:19:44.155 "max_cntlid": 65519, 00:19:44.155 "ana_reporting": false 00:19:44.155 } 00:19:44.155 }, 00:19:44.155 { 00:19:44.155 "method": "nvmf_subsystem_add_host", 00:19:44.155 "params": { 00:19:44.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.155 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.155 "psk": "key0" 00:19:44.155 } 00:19:44.155 }, 00:19:44.155 { 00:19:44.155 "method": "nvmf_subsystem_add_ns", 00:19:44.155 "params": { 00:19:44.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.155 "namespace": { 00:19:44.155 "nsid": 1, 00:19:44.155 "bdev_name": "malloc0", 00:19:44.155 "nguid": "B33551827C4C4EACAD1477E2BE85F501", 00:19:44.155 "uuid": "b3355182-7c4c-4eac-ad14-77e2be85f501", 00:19:44.155 "no_auto_visible": false 00:19:44.155 } 00:19:44.155 } 00:19:44.155 }, 00:19:44.155 { 00:19:44.155 "method": "nvmf_subsystem_add_listener", 00:19:44.155 "params": { 00:19:44.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.155 "listen_address": { 00:19:44.155 "trtype": "TCP", 00:19:44.155 "adrfam": "IPv4", 00:19:44.155 "traddr": "10.0.0.2", 00:19:44.155 "trsvcid": "4420" 00:19:44.155 }, 00:19:44.155 "secure_channel": true 00:19:44.155 } 00:19:44.155 } 00:19:44.155 ] 00:19:44.155 } 00:19:44.155 ] 00:19:44.155 }' 00:19:44.155 15:00:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:44.155 15:00:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.155 15:00:29 -- nvmf/common.sh@470 -- # nvmfpid=3800433 00:19:44.155 15:00:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:44.155 15:00:29 -- nvmf/common.sh@471 -- # waitforlisten 3800433 00:19:44.155 15:00:29 -- common/autotest_common.sh@817 -- # '[' -z 3800433 ']' 00:19:44.155 15:00:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.155 15:00:29 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:19:44.155 15:00:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.155 15:00:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:44.155 15:00:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.155 [2024-04-26 15:00:29.792195] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:44.155 [2024-04-26 15:00:29.792264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.155 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.155 [2024-04-26 15:00:29.827914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:44.155 [2024-04-26 15:00:29.859445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.444 [2024-04-26 15:00:29.950391] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.444 [2024-04-26 15:00:29.950455] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.444 [2024-04-26 15:00:29.950478] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.444 [2024-04-26 15:00:29.950491] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.444 [2024-04-26 15:00:29.950501] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
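The -c /dev/fd/62 argument is how the saved configuration round-trips: nvmfappstart is handed the JSON captured by save_config through a process-substitution file descriptor, so the restarted target replays exactly the subsystems dumped above. Schematically (a sketch, assuming the snapshot is held in $tgtcfg; the shell picks the fd number):

    tgtcfg=$(rpc.py save_config)                     # snapshot of the running target
    nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &   # shows up as /dev/fd/62 on the command line
    waitforlisten $!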
00:19:44.444 [2024-04-26 15:00:29.950579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.703 [2024-04-26 15:00:30.175654] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.703 [2024-04-26 15:00:30.207664] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.703 [2024-04-26 15:00:30.220260] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.269 15:00:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:45.269 15:00:30 -- common/autotest_common.sh@850 -- # return 0 00:19:45.269 15:00:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:45.269 15:00:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:45.269 15:00:30 -- common/autotest_common.sh@10 -- # set +x 00:19:45.269 15:00:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.269 15:00:30 -- target/tls.sh@272 -- # bdevperf_pid=3800585 00:19:45.269 15:00:30 -- target/tls.sh@273 -- # waitforlisten 3800585 /var/tmp/bdevperf.sock 00:19:45.269 15:00:30 -- common/autotest_common.sh@817 -- # '[' -z 3800585 ']' 00:19:45.270 15:00:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.270 15:00:30 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:45.270 15:00:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:45.270 15:00:30 -- target/tls.sh@270 -- # echo '{ 00:19:45.270 "subsystems": [ 00:19:45.270 { 00:19:45.270 "subsystem": "keyring", 00:19:45.270 "config": [ 00:19:45.270 { 00:19:45.270 "method": "keyring_file_add_key", 00:19:45.270 "params": { 00:19:45.270 "name": "key0", 00:19:45.270 "path": "/tmp/tmp.0KKFjE8252" 00:19:45.270 } 00:19:45.270 } 00:19:45.270 ] 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "subsystem": "iobuf", 00:19:45.270 "config": [ 00:19:45.270 { 00:19:45.270 "method": "iobuf_set_options", 00:19:45.270 "params": { 00:19:45.270 "small_pool_count": 8192, 00:19:45.270 "large_pool_count": 1024, 00:19:45.270 "small_bufsize": 8192, 00:19:45.270 "large_bufsize": 135168 00:19:45.270 } 00:19:45.270 } 00:19:45.270 ] 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "subsystem": "sock", 00:19:45.270 "config": [ 00:19:45.270 { 00:19:45.270 "method": "sock_impl_set_options", 00:19:45.270 "params": { 00:19:45.270 "impl_name": "posix", 00:19:45.270 "recv_buf_size": 2097152, 00:19:45.270 "send_buf_size": 2097152, 00:19:45.270 "enable_recv_pipe": true, 00:19:45.270 "enable_quickack": false, 00:19:45.270 "enable_placement_id": 0, 00:19:45.270 "enable_zerocopy_send_server": true, 00:19:45.270 "enable_zerocopy_send_client": false, 00:19:45.270 "zerocopy_threshold": 0, 00:19:45.270 "tls_version": 0, 00:19:45.270 "enable_ktls": false 00:19:45.270 } 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "method": "sock_impl_set_options", 00:19:45.270 "params": { 00:19:45.270 "impl_name": "ssl", 00:19:45.270 "recv_buf_size": 4096, 00:19:45.270 "send_buf_size": 4096, 00:19:45.270 "enable_recv_pipe": true, 00:19:45.270 "enable_quickack": false, 00:19:45.270 "enable_placement_id": 0, 00:19:45.270 "enable_zerocopy_send_server": true, 00:19:45.270 "enable_zerocopy_send_client": false, 00:19:45.270 "zerocopy_threshold": 0, 00:19:45.270 "tls_version": 0, 00:19:45.270 "enable_ktls": false 00:19:45.270 } 00:19:45.270 } 00:19:45.270 ] 00:19:45.270 }, 00:19:45.270 { 
00:19:45.270 "subsystem": "vmd", 00:19:45.270 "config": [] 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "subsystem": "accel", 00:19:45.270 "config": [ 00:19:45.270 { 00:19:45.270 "method": "accel_set_options", 00:19:45.270 "params": { 00:19:45.270 "small_cache_size": 128, 00:19:45.270 "large_cache_size": 16, 00:19:45.270 "task_count": 2048, 00:19:45.270 "sequence_count": 2048, 00:19:45.270 "buf_count": 2048 00:19:45.270 } 00:19:45.270 } 00:19:45.270 ] 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "subsystem": "bdev", 00:19:45.270 "config": [ 00:19:45.270 { 00:19:45.270 "method": "bdev_set_options", 00:19:45.270 "params": { 00:19:45.270 "bdev_io_pool_size": 65535, 00:19:45.270 "bdev_io_cache_size": 256, 00:19:45.270 "bdev_auto_examine": true, 00:19:45.270 "iobuf_small_cache_size": 128, 00:19:45.270 "iobuf_large_cache_size": 16 00:19:45.270 } 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "method": "bdev_raid_set_options", 00:19:45.270 "params": { 00:19:45.270 "process_window_size_kb": 1024 00:19:45.270 } 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "method": "bdev_iscsi_set_options", 00:19:45.270 "params": { 00:19:45.270 "timeout_sec": 30 00:19:45.270 } 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "method": "bdev_nvme_set_options", 00:19:45.270 "params": { 00:19:45.270 "action_on_timeout": "none", 00:19:45.270 "timeout_us": 0, 00:19:45.270 "timeout_admin_us": 0, 00:19:45.270 "keep_alive_timeout_ms": 10000, 00:19:45.270 "arbitration_burst": 0, 00:19:45.270 "low_priority_weight": 0, 00:19:45.270 "medium_priority_weight": 0, 00:19:45.270 "high_priority_weight": 0, 00:19:45.270 "nvme_adminq_poll_period_us": 10000, 00:19:45.270 "nvme_ioq_poll_period_us": 0, 00:19:45.270 "io_queue_requests": 512, 00:19:45.270 "delay_cmd_submit": true, 00:19:45.270 "transport_retry_count": 4, 00:19:45.270 "bdev_retry_count": 3, 00:19:45.270 "transport_ack_timeout": 0, 00:19:45.270 "ctrlr_loss_timeout_sec": 0, 00:19:45.270 "reconnect_delay_sec": 0, 00:19:45.270 "fast_io_fail_timeout_sec": 0, 00:19:45.270 "disable_auto_failback": false, 00:19:45.270 "generate_uuids": false, 00:19:45.270 "transport_tos": 0, 00:19:45.270 "nvme_error_stat": false, 00:19:45.270 "rdma_srq_size": 0, 00:19:45.270 "io_path_stat": false, 00:19:45.270 "allow_accel_sequence": false, 00:19:45.270 "rdma_max_cq_size": 0, 00:19:45.270 "rdma_cm_event_timeout_ms": 0, 00:19:45.270 "dhchap_digests": [ 00:19:45.270 "sha256", 00:19:45.270 "sha384", 00:19:45.270 "sha512" 00:19:45.270 ], 00:19:45.270 "dhchap_dhgroups": [ 00:19:45.270 "null", 00:19:45.270 "ffdhe2048", 00:19:45.270 "ffdhe3072", 00:19:45.270 "ffdhe4096", 00:19:45.270 "ffdhe6144", 00:19:45.270 "ffdhe8192" 00:19:45.270 ] 00:19:45.270 } 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "method": "bdev_nvme_attach_controller", 00:19:45.270 "params": { 00:19:45.270 "name": "nvme0", 00:19:45.270 "trtype": "TCP", 00:19:45.270 "adrfam": "IPv4", 00:19:45.270 "traddr": "10.0.0.2", 00:19:45.270 "trsvcid": "4420", 00:19:45.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.270 "prchk_reftag": false, 00:19:45.270 "prchk_guard": false, 00:19:45.270 "ctrlr_loss_timeout_sec": 0, 00:19:45.270 "reconnect_delay_sec": 0, 00:19:45.270 "fast_io_fail_timeout_sec": 0, 00:19:45.270 "psk": "key0", 00:19:45.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.270 "hdgst": false, 00:19:45.270 "ddgst": false 00:19:45.270 } 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "method": "bdev_nvme_set_hotplug", 00:19:45.270 "params": { 00:19:45.270 "period_us": 100000, 00:19:45.270 "enable": false 00:19:45.270 } 00:19:45.270 }, 00:19:45.270 { 
00:19:45.270 "method": "bdev_enable_histogram", 00:19:45.270 "params": { 00:19:45.270 "name": "nvme0n1", 00:19:45.270 "enable": true 00:19:45.270 } 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "method": "bdev_wait_for_examine" 00:19:45.270 } 00:19:45.270 ] 00:19:45.270 }, 00:19:45.270 { 00:19:45.270 "subsystem": "nbd", 00:19:45.270 "config": [] 00:19:45.270 } 00:19:45.270 ] 00:19:45.270 }' 00:19:45.270 15:00:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.270 15:00:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:45.270 15:00:30 -- common/autotest_common.sh@10 -- # set +x 00:19:45.270 [2024-04-26 15:00:30.848630] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:45.270 [2024-04-26 15:00:30.848712] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3800585 ] 00:19:45.270 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.270 [2024-04-26 15:00:30.880584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:45.270 [2024-04-26 15:00:30.912663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.270 [2024-04-26 15:00:31.001314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.530 [2024-04-26 15:00:31.176507] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.098 15:00:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:46.098 15:00:31 -- common/autotest_common.sh@850 -- # return 0 00:19:46.098 15:00:31 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:46.098 15:00:31 -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:46.355 15:00:32 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.355 15:00:32 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:46.613 Running I/O for 1 seconds... 
00:19:47.548 00:19:47.548 Latency(us) 00:19:47.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.548 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:47.548 Verification LBA range: start 0x0 length 0x2000 00:19:47.548 nvme0n1 : 1.02 2993.77 11.69 0.00 0.00 42257.26 6893.42 78837.38 00:19:47.548 =================================================================================================================== 00:19:47.548 Total : 2993.77 11.69 0.00 0.00 42257.26 6893.42 78837.38 00:19:47.548 0 00:19:47.548 15:00:33 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:47.548 15:00:33 -- target/tls.sh@279 -- # cleanup 00:19:47.548 15:00:33 -- target/tls.sh@15 -- # process_shm --id 0 00:19:47.548 15:00:33 -- common/autotest_common.sh@794 -- # type=--id 00:19:47.548 15:00:33 -- common/autotest_common.sh@795 -- # id=0 00:19:47.548 15:00:33 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:47.548 15:00:33 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:47.548 15:00:33 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:47.548 15:00:33 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:47.548 15:00:33 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:47.548 15:00:33 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:47.548 nvmf_trace.0 00:19:47.548 15:00:33 -- common/autotest_common.sh@809 -- # return 0 00:19:47.548 15:00:33 -- target/tls.sh@16 -- # killprocess 3800585 00:19:47.548 15:00:33 -- common/autotest_common.sh@936 -- # '[' -z 3800585 ']' 00:19:47.548 15:00:33 -- common/autotest_common.sh@940 -- # kill -0 3800585 00:19:47.548 15:00:33 -- common/autotest_common.sh@941 -- # uname 00:19:47.548 15:00:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.548 15:00:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3800585 00:19:47.805 15:00:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:47.805 15:00:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:47.805 15:00:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3800585' 00:19:47.805 killing process with pid 3800585 00:19:47.805 15:00:33 -- common/autotest_common.sh@955 -- # kill 3800585 00:19:47.805 Received shutdown signal, test time was about 1.000000 seconds 00:19:47.805 00:19:47.805 Latency(us) 00:19:47.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.805 =================================================================================================================== 00:19:47.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.805 15:00:33 -- common/autotest_common.sh@960 -- # wait 3800585 00:19:47.805 15:00:33 -- target/tls.sh@17 -- # nvmftestfini 00:19:47.805 15:00:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:47.805 15:00:33 -- nvmf/common.sh@117 -- # sync 00:19:47.805 15:00:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.805 15:00:33 -- nvmf/common.sh@120 -- # set +e 00:19:47.806 15:00:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.806 15:00:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.806 rmmod nvme_tcp 00:19:48.063 rmmod nvme_fabrics 00:19:48.063 rmmod nvme_keyring 00:19:48.063 15:00:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:48.063 15:00:33 -- nvmf/common.sh@124 
-- # set -e 00:19:48.063 15:00:33 -- nvmf/common.sh@125 -- # return 0 00:19:48.063 15:00:33 -- nvmf/common.sh@478 -- # '[' -n 3800433 ']' 00:19:48.063 15:00:33 -- nvmf/common.sh@479 -- # killprocess 3800433 00:19:48.063 15:00:33 -- common/autotest_common.sh@936 -- # '[' -z 3800433 ']' 00:19:48.063 15:00:33 -- common/autotest_common.sh@940 -- # kill -0 3800433 00:19:48.063 15:00:33 -- common/autotest_common.sh@941 -- # uname 00:19:48.063 15:00:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:48.063 15:00:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3800433 00:19:48.063 15:00:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:48.063 15:00:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:48.063 15:00:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3800433' 00:19:48.063 killing process with pid 3800433 00:19:48.063 15:00:33 -- common/autotest_common.sh@955 -- # kill 3800433 00:19:48.063 15:00:33 -- common/autotest_common.sh@960 -- # wait 3800433 00:19:48.322 15:00:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:48.322 15:00:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:48.322 15:00:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:48.322 15:00:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.322 15:00:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:48.322 15:00:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.322 15:00:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.322 15:00:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.225 15:00:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.225 15:00:35 -- target/tls.sh@18 -- # rm -f /tmp/tmp.FT9gS7BfMH /tmp/tmp.tcl6peEyfL /tmp/tmp.0KKFjE8252 00:19:50.225 00:19:50.225 real 1m18.512s 00:19:50.225 user 2m5.065s 00:19:50.225 sys 0m28.115s 00:19:50.225 15:00:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:50.225 15:00:35 -- common/autotest_common.sh@10 -- # set +x 00:19:50.225 ************************************ 00:19:50.225 END TEST nvmf_tls 00:19:50.225 ************************************ 00:19:50.225 15:00:35 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:50.225 15:00:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:50.225 15:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:50.225 15:00:35 -- common/autotest_common.sh@10 -- # set +x 00:19:50.482 ************************************ 00:19:50.482 START TEST nvmf_fips 00:19:50.482 ************************************ 00:19:50.482 15:00:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:50.482 * Looking for test storage... 
00:19:50.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:50.482 15:00:36 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.482 15:00:36 -- nvmf/common.sh@7 -- # uname -s 00:19:50.482 15:00:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.482 15:00:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.482 15:00:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.482 15:00:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.482 15:00:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.482 15:00:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.482 15:00:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.482 15:00:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.482 15:00:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.482 15:00:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.482 15:00:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:50.482 15:00:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:50.482 15:00:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.482 15:00:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.482 15:00:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.482 15:00:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.482 15:00:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.482 15:00:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.482 15:00:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.482 15:00:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.482 15:00:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.482 15:00:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.482 15:00:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.482 15:00:36 -- paths/export.sh@5 -- # export PATH 00:19:50.482 15:00:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.482 15:00:36 -- nvmf/common.sh@47 -- # : 0 00:19:50.482 15:00:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.482 15:00:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.482 15:00:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.482 15:00:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.482 15:00:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.482 15:00:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.482 15:00:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.482 15:00:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.482 15:00:36 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.482 15:00:36 -- fips/fips.sh@89 -- # check_openssl_version 00:19:50.482 15:00:36 -- fips/fips.sh@83 -- # local target=3.0.0 00:19:50.482 15:00:36 -- fips/fips.sh@85 -- # openssl version 00:19:50.482 15:00:36 -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:50.482 15:00:36 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:50.482 15:00:36 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:50.482 15:00:36 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:50.482 15:00:36 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:50.482 15:00:36 -- scripts/common.sh@333 -- # IFS=.-: 00:19:50.482 15:00:36 -- scripts/common.sh@333 -- # read -ra ver1 00:19:50.482 15:00:36 -- scripts/common.sh@334 -- # IFS=.-: 00:19:50.482 15:00:36 -- scripts/common.sh@334 -- # read -ra ver2 00:19:50.482 15:00:36 -- scripts/common.sh@335 -- # local 'op=>=' 00:19:50.482 15:00:36 -- scripts/common.sh@337 -- # ver1_l=3 00:19:50.482 15:00:36 -- scripts/common.sh@338 -- # ver2_l=3 00:19:50.482 15:00:36 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:50.482 15:00:36 -- scripts/common.sh@341 -- # case "$op" in 00:19:50.482 15:00:36 -- scripts/common.sh@345 -- # : 1 00:19:50.482 15:00:36 -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:50.482 15:00:36 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.482 15:00:36 -- scripts/common.sh@362 -- # decimal 3 00:19:50.482 15:00:36 -- scripts/common.sh@350 -- # local d=3 00:19:50.482 15:00:36 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:50.482 15:00:36 -- scripts/common.sh@352 -- # echo 3 00:19:50.482 15:00:36 -- scripts/common.sh@362 -- # ver1[v]=3 00:19:50.482 15:00:36 -- scripts/common.sh@363 -- # decimal 3 00:19:50.482 15:00:36 -- scripts/common.sh@350 -- # local d=3 00:19:50.482 15:00:36 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:50.482 15:00:36 -- scripts/common.sh@352 -- # echo 3 00:19:50.482 15:00:36 -- scripts/common.sh@363 -- # ver2[v]=3 00:19:50.482 15:00:36 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:50.482 15:00:36 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:50.482 15:00:36 -- scripts/common.sh@361 -- # (( v++ )) 00:19:50.482 15:00:36 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.482 15:00:36 -- scripts/common.sh@362 -- # decimal 0 00:19:50.482 15:00:36 -- scripts/common.sh@350 -- # local d=0 00:19:50.482 15:00:36 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:50.482 15:00:36 -- scripts/common.sh@352 -- # echo 0 00:19:50.482 15:00:36 -- scripts/common.sh@362 -- # ver1[v]=0 00:19:50.482 15:00:36 -- scripts/common.sh@363 -- # decimal 0 00:19:50.482 15:00:36 -- scripts/common.sh@350 -- # local d=0 00:19:50.482 15:00:36 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:50.482 15:00:36 -- scripts/common.sh@352 -- # echo 0 00:19:50.482 15:00:36 -- scripts/common.sh@363 -- # ver2[v]=0 00:19:50.482 15:00:36 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:50.482 15:00:36 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:50.482 15:00:36 -- scripts/common.sh@361 -- # (( v++ )) 00:19:50.482 15:00:36 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.482 15:00:36 -- scripts/common.sh@362 -- # decimal 9 00:19:50.482 15:00:36 -- scripts/common.sh@350 -- # local d=9 00:19:50.482 15:00:36 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:50.482 15:00:36 -- scripts/common.sh@352 -- # echo 9 00:19:50.482 15:00:36 -- scripts/common.sh@362 -- # ver1[v]=9 00:19:50.482 15:00:36 -- scripts/common.sh@363 -- # decimal 0 00:19:50.482 15:00:36 -- scripts/common.sh@350 -- # local d=0 00:19:50.482 15:00:36 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:50.482 15:00:36 -- scripts/common.sh@352 -- # echo 0 00:19:50.482 15:00:36 -- scripts/common.sh@363 -- # ver2[v]=0 00:19:50.482 15:00:36 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:50.483 15:00:36 -- scripts/common.sh@364 -- # return 0 00:19:50.483 15:00:36 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:50.483 15:00:36 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:50.483 15:00:36 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:50.483 15:00:36 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:50.483 15:00:36 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:50.483 15:00:36 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:50.483 15:00:36 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:50.483 15:00:36 -- fips/fips.sh@113 -- # build_openssl_config 00:19:50.483 15:00:36 -- fips/fips.sh@37 -- # cat 00:19:50.483 15:00:36 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:50.483 15:00:36 -- fips/fips.sh@58 -- # cat - 00:19:50.483 15:00:36 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:50.483 15:00:36 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:50.483 15:00:36 -- fips/fips.sh@116 -- # mapfile -t providers 00:19:50.483 15:00:36 -- fips/fips.sh@116 -- # openssl list -providers 00:19:50.483 15:00:36 -- fips/fips.sh@116 -- # grep name 00:19:50.483 15:00:36 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:50.483 15:00:36 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:50.483 15:00:36 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:50.483 15:00:36 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:50.483 15:00:36 -- fips/fips.sh@127 -- # : 00:19:50.483 15:00:36 -- common/autotest_common.sh@638 -- # local es=0 00:19:50.483 15:00:36 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:50.483 15:00:36 -- common/autotest_common.sh@626 -- # local arg=openssl 00:19:50.483 15:00:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:50.483 15:00:36 -- common/autotest_common.sh@630 -- # type -t openssl 00:19:50.483 15:00:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:50.483 15:00:36 -- common/autotest_common.sh@632 -- # type -P openssl 00:19:50.483 15:00:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:50.483 15:00:36 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:19:50.483 15:00:36 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:19:50.483 15:00:36 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:19:50.741 Error setting digest 00:19:50.741 00922F0F0D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:50.741 00922F0F0D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:50.741 15:00:36 -- common/autotest_common.sh@641 -- # es=1 00:19:50.741 15:00:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:50.741 15:00:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:50.741 15:00:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:50.741 15:00:36 -- fips/fips.sh@130 -- # nvmftestinit 00:19:50.741 15:00:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:50.741 15:00:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.741 15:00:36 -- nvmf/common.sh@437 -- # prepare_net_devs 
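[Editor's note] The fips.sh preamble traced above does three things: it parses `openssl version` and compares 3.0.9 against the 3.0.0 floor one dotted field at a time, checks that fips.so exists under the directory reported by `openssl info -modulesdir`, and lists the loaded providers before proving enforcement by watching `openssl md5` fail. A minimal sketch of the same checks, assuming only an OpenSSL 3.x binary on PATH; version_ge is an illustrative stand-in for the in-tree cmp_versions helper (which additionally splits on '-' and ':'):

  #!/usr/bin/env bash
  version_ge() {                        # dotted-version compare, field by field
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
    done
    return 0                            # equal counts as >=
  }

  ver=$(openssl version | awk '{print $2}')
  version_ge "$ver" 3.0.0 || { echo "need OpenSSL >= 3.0.0"; exit 1; }

  openssl list -providers | grep name   # expect both a 'base' and a 'fips' entry

  # With the FIPS provider enforced, a non-approved digest must be rejected --
  # exactly the 'Error setting digest' failure visible in the trace above.
  if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 succeeded: FIPS mode is not being enforced" >&2
  fi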
00:19:50.741 15:00:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:50.741 15:00:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:50.741 15:00:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.741 15:00:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.741 15:00:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.741 15:00:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:50.741 15:00:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:50.741 15:00:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.741 15:00:36 -- common/autotest_common.sh@10 -- # set +x 00:19:52.637 15:00:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:52.637 15:00:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.637 15:00:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.637 15:00:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.637 15:00:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.637 15:00:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.637 15:00:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.637 15:00:38 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.637 15:00:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.637 15:00:38 -- nvmf/common.sh@296 -- # e810=() 00:19:52.637 15:00:38 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.637 15:00:38 -- nvmf/common.sh@297 -- # x722=() 00:19:52.637 15:00:38 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.637 15:00:38 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.637 15:00:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.637 15:00:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.637 15:00:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.637 15:00:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.637 15:00:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.637 15:00:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.637 15:00:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:52.637 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:52.637 15:00:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.637 15:00:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:52.637 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:52.637 15:00:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.637 15:00:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.637 15:00:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.637 15:00:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.637 15:00:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.637 15:00:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:52.637 Found net devices under 0000:84:00.0: cvl_0_0 00:19:52.637 15:00:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.637 15:00:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.637 15:00:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.637 15:00:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.637 15:00:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.637 15:00:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:52.637 Found net devices under 0000:84:00.1: cvl_0_1 00:19:52.637 15:00:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.637 15:00:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:52.637 15:00:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:52.637 15:00:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:52.637 15:00:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:52.637 15:00:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.637 15:00:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.637 15:00:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.637 15:00:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.637 15:00:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.637 15:00:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.637 15:00:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.637 15:00:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.637 15:00:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.637 15:00:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.637 15:00:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.637 15:00:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.637 15:00:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.895 15:00:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.895 15:00:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:19:52.895 15:00:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.895 15:00:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.895 15:00:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.895 15:00:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.895 15:00:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:19:52.895 00:19:52.895 --- 10.0.0.2 ping statistics --- 00:19:52.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.895 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:52.895 15:00:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:19:52.895 00:19:52.895 --- 10.0.0.1 ping statistics --- 00:19:52.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.895 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:52.895 15:00:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.895 15:00:38 -- nvmf/common.sh@411 -- # return 0 00:19:52.895 15:00:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:52.895 15:00:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.895 15:00:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:52.895 15:00:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:52.895 15:00:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.895 15:00:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:52.895 15:00:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:52.895 15:00:38 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:52.895 15:00:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:52.895 15:00:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:52.895 15:00:38 -- common/autotest_common.sh@10 -- # set +x 00:19:52.895 15:00:38 -- nvmf/common.sh@470 -- # nvmfpid=3802894 00:19:52.895 15:00:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.895 15:00:38 -- nvmf/common.sh@471 -- # waitforlisten 3802894 00:19:52.895 15:00:38 -- common/autotest_common.sh@817 -- # '[' -z 3802894 ']' 00:19:52.895 15:00:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.895 15:00:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:52.895 15:00:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.895 15:00:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.895 15:00:38 -- common/autotest_common.sh@10 -- # set +x 00:19:52.895 [2024-04-26 15:00:38.544686] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
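[Editor's note] Behind the nvmf_tcp_init trace above, the two E810 ports are split across network namespaces so the SPDK target and the kernel initiator exchange traffic over a real link: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1. Condensed from the commands in the trace (interface names are specific to this machine):

  # Move the target-side port into its own namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address both ends; the initiator side stays in the root namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring links up and open the NVMe/TCP port on the initiator interface.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check reachability in both directions before any NVMe traffic.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1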
00:19:52.895 [2024-04-26 15:00:38.544775] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.895 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.895 [2024-04-26 15:00:38.582977] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:52.895 [2024-04-26 15:00:38.611227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.153 [2024-04-26 15:00:38.695252] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.153 [2024-04-26 15:00:38.695307] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.153 [2024-04-26 15:00:38.695346] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.153 [2024-04-26 15:00:38.695358] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.153 [2024-04-26 15:00:38.695369] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.153 [2024-04-26 15:00:38.695428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.153 15:00:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.153 15:00:38 -- common/autotest_common.sh@850 -- # return 0 00:19:53.153 15:00:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:53.153 15:00:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:53.153 15:00:38 -- common/autotest_common.sh@10 -- # set +x 00:19:53.153 15:00:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.153 15:00:38 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:53.153 15:00:38 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:53.153 15:00:38 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:53.153 15:00:38 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:53.153 15:00:38 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:53.153 15:00:38 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:53.153 15:00:38 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:53.154 15:00:38 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:53.413 [2024-04-26 15:00:39.059438] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.414 [2024-04-26 15:00:39.075424] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.414 [2024-04-26 15:00:39.075709] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.414 [2024-04-26 15:00:39.107893] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:53.414 malloc0 00:19:53.414 15:00:39 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.414 15:00:39 -- fips/fips.sh@147 -- # bdevperf_pid=3802993 00:19:53.414 15:00:39 -- 
fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.414 15:00:39 -- fips/fips.sh@148 -- # waitforlisten 3802993 /var/tmp/bdevperf.sock 00:19:53.414 15:00:39 -- common/autotest_common.sh@817 -- # '[' -z 3802993 ']' 00:19:53.414 15:00:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.414 15:00:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.414 15:00:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.414 15:00:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.414 15:00:39 -- common/autotest_common.sh@10 -- # set +x 00:19:53.670 [2024-04-26 15:00:39.197696] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:19:53.670 [2024-04-26 15:00:39.197787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802993 ] 00:19:53.670 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.670 [2024-04-26 15:00:39.228875] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:53.670 [2024-04-26 15:00:39.255682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.670 [2024-04-26 15:00:39.337391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.927 15:00:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.927 15:00:39 -- common/autotest_common.sh@850 -- # return 0 00:19:53.927 15:00:39 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:54.184 [2024-04-26 15:00:39.711098] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.184 [2024-04-26 15:00:39.711218] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:54.184 TLSTESTn1 00:19:54.184 15:00:39 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.184 Running I/O for 10 seconds... 
00:20:06.380 00:20:06.380 Latency(us) 00:20:06.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.380 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:06.380 Verification LBA range: start 0x0 length 0x2000 00:20:06.380 TLSTESTn1 : 10.03 3299.73 12.89 0.00 0.00 38723.14 7427.41 64468.01 00:20:06.380 =================================================================================================================== 00:20:06.380 Total : 3299.73 12.89 0.00 0.00 38723.14 7427.41 64468.01 00:20:06.380 0 00:20:06.380 15:00:49 -- fips/fips.sh@1 -- # cleanup 00:20:06.380 15:00:49 -- fips/fips.sh@15 -- # process_shm --id 0 00:20:06.380 15:00:49 -- common/autotest_common.sh@794 -- # type=--id 00:20:06.380 15:00:49 -- common/autotest_common.sh@795 -- # id=0 00:20:06.380 15:00:49 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:06.380 15:00:49 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:06.380 15:00:49 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:06.380 15:00:49 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:06.380 15:00:49 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:06.380 15:00:49 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:06.380 nvmf_trace.0 00:20:06.380 15:00:50 -- common/autotest_common.sh@809 -- # return 0 00:20:06.380 15:00:50 -- fips/fips.sh@16 -- # killprocess 3802993 00:20:06.380 15:00:50 -- common/autotest_common.sh@936 -- # '[' -z 3802993 ']' 00:20:06.380 15:00:50 -- common/autotest_common.sh@940 -- # kill -0 3802993 00:20:06.380 15:00:50 -- common/autotest_common.sh@941 -- # uname 00:20:06.380 15:00:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.380 15:00:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3802993 00:20:06.380 15:00:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:06.380 15:00:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:06.380 15:00:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3802993' 00:20:06.380 killing process with pid 3802993 00:20:06.380 15:00:50 -- common/autotest_common.sh@955 -- # kill 3802993 00:20:06.380 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.380 00:20:06.380 Latency(us) 00:20:06.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.380 =================================================================================================================== 00:20:06.380 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.380 [2024-04-26 15:00:50.044452] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:06.380 15:00:50 -- common/autotest_common.sh@960 -- # wait 3802993 00:20:06.380 15:00:50 -- fips/fips.sh@17 -- # nvmftestfini 00:20:06.380 15:00:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:06.380 15:00:50 -- nvmf/common.sh@117 -- # sync 00:20:06.380 15:00:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.380 15:00:50 -- nvmf/common.sh@120 -- # set +e 00:20:06.380 15:00:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.380 15:00:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.380 rmmod nvme_tcp 00:20:06.380 rmmod nvme_fabrics 00:20:06.380 rmmod nvme_keyring 
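[Editor's note] To recap what the TLSTESTn1 run above exercised: fips.sh writes a PSK in the NVMe/TCP interchange format to key.txt, registers the host NQN against the subsystem with that PSK path, then hands bdevperf the same file when attaching. A hedged reduction of that flow (subsystem, namespace and listener RPCs omitted; the attach invocation is the one visible verbatim in the trace):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > key.txt
  chmod 0600 key.txt    # restrict permissions on the PSK file, as the test does

  # Target side: tie the host NQN to the subsystem with the PSK path
  # (this is the deprecated 'PSK path' feature the trace warns about).
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key.txt

  # Initiator side: bdevperf runs with -z and exposes its own RPC socket.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk key.txt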
00:20:06.380 15:00:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.380 15:00:50 -- nvmf/common.sh@124 -- # set -e 00:20:06.380 15:00:50 -- nvmf/common.sh@125 -- # return 0 00:20:06.380 15:00:50 -- nvmf/common.sh@478 -- # '[' -n 3802894 ']' 00:20:06.380 15:00:50 -- nvmf/common.sh@479 -- # killprocess 3802894 00:20:06.380 15:00:50 -- common/autotest_common.sh@936 -- # '[' -z 3802894 ']' 00:20:06.380 15:00:50 -- common/autotest_common.sh@940 -- # kill -0 3802894 00:20:06.380 15:00:50 -- common/autotest_common.sh@941 -- # uname 00:20:06.380 15:00:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.380 15:00:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3802894 00:20:06.380 15:00:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:06.380 15:00:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:06.380 15:00:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3802894' 00:20:06.380 killing process with pid 3802894 00:20:06.380 15:00:50 -- common/autotest_common.sh@955 -- # kill 3802894 00:20:06.381 [2024-04-26 15:00:50.368908] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:06.381 15:00:50 -- common/autotest_common.sh@960 -- # wait 3802894 00:20:06.381 15:00:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:06.381 15:00:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:06.381 15:00:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:06.381 15:00:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.381 15:00:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.381 15:00:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.381 15:00:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.381 15:00:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.947 15:00:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:06.947 15:00:52 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:06.947 00:20:06.947 real 0m16.602s 00:20:06.947 user 0m19.717s 00:20:06.947 sys 0m7.147s 00:20:06.947 15:00:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:06.947 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:20:06.947 ************************************ 00:20:06.947 END TEST nvmf_fips 00:20:06.947 ************************************ 00:20:06.947 15:00:52 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:20:06.947 15:00:52 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:06.947 15:00:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:06.947 15:00:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:06.947 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:20:07.220 ************************************ 00:20:07.220 START TEST nvmf_fuzz 00:20:07.220 ************************************ 00:20:07.220 15:00:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:07.220 * Looking for test storage... 
00:20:07.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.220 15:00:52 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.220 15:00:52 -- nvmf/common.sh@7 -- # uname -s 00:20:07.220 15:00:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.220 15:00:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.220 15:00:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.220 15:00:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.220 15:00:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.220 15:00:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.220 15:00:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.220 15:00:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.220 15:00:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.220 15:00:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.220 15:00:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:07.220 15:00:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:07.220 15:00:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.220 15:00:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.220 15:00:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.220 15:00:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.220 15:00:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.220 15:00:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.220 15:00:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.220 15:00:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.220 15:00:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.220 15:00:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.220 15:00:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.220 15:00:52 -- paths/export.sh@5 -- # export PATH 00:20:07.220 15:00:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.220 15:00:52 -- nvmf/common.sh@47 -- # : 0 00:20:07.220 15:00:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.220 15:00:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.220 15:00:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.220 15:00:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.220 15:00:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.220 15:00:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.220 15:00:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.220 15:00:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.220 15:00:52 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:07.220 15:00:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:07.220 15:00:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.220 15:00:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:07.220 15:00:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:07.220 15:00:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:07.220 15:00:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.220 15:00:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.220 15:00:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.220 15:00:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:07.220 15:00:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:07.220 15:00:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.220 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:20:09.752 15:00:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:09.752 15:00:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:09.752 15:00:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:09.752 15:00:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:09.752 15:00:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:09.752 15:00:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:09.752 15:00:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:09.752 15:00:54 -- nvmf/common.sh@295 -- # net_devs=() 00:20:09.752 15:00:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:09.752 15:00:54 -- nvmf/common.sh@296 -- # e810=() 00:20:09.752 15:00:54 -- nvmf/common.sh@296 -- # local -ga e810 00:20:09.752 15:00:54 -- nvmf/common.sh@297 -- # x722=() 
00:20:09.752 15:00:54 -- nvmf/common.sh@297 -- # local -ga x722 00:20:09.752 15:00:54 -- nvmf/common.sh@298 -- # mlx=() 00:20:09.752 15:00:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:09.752 15:00:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.752 15:00:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:09.752 15:00:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:09.752 15:00:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:09.752 15:00:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.752 15:00:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:09.752 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:09.752 15:00:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.752 15:00:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:09.752 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:09.752 15:00:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:09.752 15:00:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.752 15:00:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.752 15:00:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:09.752 15:00:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.752 15:00:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:09.752 Found net devices under 0000:84:00.0: cvl_0_0 00:20:09.752 15:00:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
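[Editor's note] gather_supported_nvmf_pci_devs, traced above for both 0x159b functions, classifies devices by vendor/device ID (e810, x722, mlx arrays) and then maps each matching PCI address to its kernel interface through sysfs. The lookup at its core reduces to:

  shopt -s nullglob                    # let an unbound device expand to nothing
  pci=0000:84:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  (( ${#pci_net_devs[@]} )) || { echo "no net device bound under $pci"; exit 1; }
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip sysfs prefix, keep iface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"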
00:20:09.752 15:00:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.752 15:00:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.752 15:00:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:09.752 15:00:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.752 15:00:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:09.752 Found net devices under 0000:84:00.1: cvl_0_1 00:20:09.752 15:00:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.752 15:00:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:09.752 15:00:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:09.752 15:00:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:09.752 15:00:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:09.752 15:00:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.752 15:00:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.752 15:00:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.752 15:00:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:09.752 15:00:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.752 15:00:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.752 15:00:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:09.753 15:00:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.753 15:00:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.753 15:00:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:09.753 15:00:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:09.753 15:00:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.753 15:00:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.753 15:00:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.753 15:00:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.753 15:00:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:09.753 15:00:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.753 15:00:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.753 15:00:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.753 15:00:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:09.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:20:09.753 00:20:09.753 --- 10.0.0.2 ping statistics --- 00:20:09.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.753 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:20:09.753 15:00:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:09.753 00:20:09.753 --- 10.0.0.1 ping statistics --- 00:20:09.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.753 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:09.753 15:00:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.753 15:00:55 -- nvmf/common.sh@411 -- # return 0 00:20:09.753 15:00:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:09.753 15:00:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.753 15:00:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:09.753 15:00:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:09.753 15:00:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.753 15:00:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:09.753 15:00:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3806269 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3806269 00:20:09.753 15:00:55 -- common/autotest_common.sh@817 -- # '[' -z 3806269 ']' 00:20:09.753 15:00:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.753 15:00:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:09.753 15:00:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
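[Editor's note] waitforlisten, which prints the 'Waiting for process...' line above, amounts to polling the freshly launched nvmf_tgt over its RPC socket until it answers, while checking the process is still alive. An illustrative stand-in (the retry count and half-second step are assumptions; the real helper in autotest_common.sh does more bookkeeping):

  pid=3806269                          # pid reported by nvmfappstart above
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "target exited early"; exit 1; }
    # rpc_get_methods only succeeds once the app is listening on the socket.
    if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods >/dev/null 2>&1; then
      echo "target is listening"
      break
    fi
    sleep 0.5
  done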
00:20:09.753 15:00:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:09.753 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:20:09.753 15:00:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:09.753 15:00:55 -- common/autotest_common.sh@850 -- # return 0 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.753 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.753 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:20:09.753 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:09.753 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.753 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:20:09.753 Malloc0 00:20:09.753 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:09.753 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.753 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:20:09.753 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:09.753 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.753 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:20:09.753 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.753 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.753 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:20:09.753 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:20:09.753 15:00:55 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:20:41.826 Fuzzing completed. Shutting down the fuzz application 00:20:41.826 00:20:41.826 Dumping successful admin opcodes: 00:20:41.826 8, 9, 10, 24, 00:20:41.826 Dumping successful io opcodes: 00:20:41.826 0, 9, 00:20:41.826 NS: 0x200003aeff00 I/O qp, Total commands completed: 464907, total successful commands: 2688, random_seed: 3955451648 00:20:41.826 NS: 0x200003aeff00 admin qp, Total commands completed: 55856, total successful commands: 444, random_seed: 3661616192 00:20:41.826 15:01:26 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:41.826 Fuzzing completed. 
Shutting down the fuzz application 00:20:41.826 00:20:41.826 Dumping successful admin opcodes: 00:20:41.826 24, 00:20:41.826 Dumping successful io opcodes: 00:20:41.826 00:20:41.826 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3525149774 00:20:41.826 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3525291914 00:20:41.826 15:01:27 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.826 15:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.826 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:20:41.826 15:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.826 15:01:27 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:41.826 15:01:27 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:41.826 15:01:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:41.826 15:01:27 -- nvmf/common.sh@117 -- # sync 00:20:41.826 15:01:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:41.826 15:01:27 -- nvmf/common.sh@120 -- # set +e 00:20:41.826 15:01:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:41.826 15:01:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:41.826 rmmod nvme_tcp 00:20:41.826 rmmod nvme_fabrics 00:20:42.085 rmmod nvme_keyring 00:20:42.085 15:01:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.085 15:01:27 -- nvmf/common.sh@124 -- # set -e 00:20:42.085 15:01:27 -- nvmf/common.sh@125 -- # return 0 00:20:42.086 15:01:27 -- nvmf/common.sh@478 -- # '[' -n 3806269 ']' 00:20:42.086 15:01:27 -- nvmf/common.sh@479 -- # killprocess 3806269 00:20:42.086 15:01:27 -- common/autotest_common.sh@936 -- # '[' -z 3806269 ']' 00:20:42.086 15:01:27 -- common/autotest_common.sh@940 -- # kill -0 3806269 00:20:42.086 15:01:27 -- common/autotest_common.sh@941 -- # uname 00:20:42.086 15:01:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.086 15:01:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3806269 00:20:42.086 15:01:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:42.086 15:01:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:42.086 15:01:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3806269' 00:20:42.086 killing process with pid 3806269 00:20:42.086 15:01:27 -- common/autotest_common.sh@955 -- # kill 3806269 00:20:42.086 15:01:27 -- common/autotest_common.sh@960 -- # wait 3806269 00:20:42.342 15:01:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:42.342 15:01:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:42.342 15:01:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:42.342 15:01:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.342 15:01:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.342 15:01:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.342 15:01:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.342 15:01:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.248 15:01:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:44.248 15:01:29 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:20:44.248 00:20:44.248 real 0m37.183s 00:20:44.248 user 0m50.635s 00:20:44.248 sys 
0m15.740s 00:20:44.248 15:01:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:44.248 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:20:44.248 ************************************ 00:20:44.248 END TEST nvmf_fuzz 00:20:44.248 ************************************ 00:20:44.248 15:01:29 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:44.248 15:01:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:44.248 15:01:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:44.248 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:20:44.506 ************************************ 00:20:44.506 START TEST nvmf_multiconnection 00:20:44.506 ************************************ 00:20:44.506 15:01:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:44.506 * Looking for test storage... 00:20:44.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.506 15:01:30 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.506 15:01:30 -- nvmf/common.sh@7 -- # uname -s 00:20:44.506 15:01:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.506 15:01:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.506 15:01:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.506 15:01:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.506 15:01:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.506 15:01:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.506 15:01:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.506 15:01:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.506 15:01:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.506 15:01:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.506 15:01:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:44.506 15:01:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:44.506 15:01:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.506 15:01:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.506 15:01:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.506 15:01:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.506 15:01:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.506 15:01:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.506 15:01:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.506 15:01:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.506 15:01:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:44.506 15:01:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.506 15:01:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.506 15:01:30 -- paths/export.sh@5 -- # export PATH 00:20:44.506 15:01:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.506 15:01:30 -- nvmf/common.sh@47 -- # : 0 00:20:44.506 15:01:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.506 15:01:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.506 15:01:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.506 15:01:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.506 15:01:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.506 15:01:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:44.506 15:01:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.506 15:01:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:44.506 15:01:30 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:44.506 15:01:30 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:44.506 15:01:30 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:44.506 15:01:30 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:44.506 15:01:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:44.506 15:01:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.506 15:01:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:44.506 15:01:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:44.506 15:01:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:44.506 15:01:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.506 15:01:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.506 15:01:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.506 15:01:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:44.506 15:01:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:44.506 15:01:30 -- nvmf/common.sh@285 -- # xtrace_disable 
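[Editor's note] multiconnection.sh sets MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and NVMF_SUBSYS=11 above; once the target is up it creates one malloc-backed subsystem per index and exposes each on the TCP listener. In outline, a sketch of the loop those variables drive (not a copy of the script; serial numbers here are illustrative):

  MALLOC_BDEV_SIZE=64
  MALLOC_BLOCK_SIZE=512
  NVMF_SUBSYS=11

  for i in $(seq 1 $NVMF_SUBSYS); do
    # 64 MiB malloc bdev with 512-byte blocks backing each subsystem.
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i \
        $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
  done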
00:20:44.506 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:20:47.033 15:01:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:47.033 15:01:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.033 15:01:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.033 15:01:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.033 15:01:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.033 15:01:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.033 15:01:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.033 15:01:32 -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.033 15:01:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.033 15:01:32 -- nvmf/common.sh@296 -- # e810=() 00:20:47.033 15:01:32 -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.033 15:01:32 -- nvmf/common.sh@297 -- # x722=() 00:20:47.033 15:01:32 -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.033 15:01:32 -- nvmf/common.sh@298 -- # mlx=() 00:20:47.033 15:01:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.033 15:01:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.033 15:01:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.033 15:01:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.033 15:01:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.033 15:01:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.033 15:01:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.033 15:01:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.034 15:01:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.034 15:01:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.034 15:01:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.034 15:01:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.034 15:01:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.034 15:01:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:47.034 15:01:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.034 15:01:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.034 15:01:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:47.034 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:47.034 15:01:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.034 15:01:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:47.034 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:47.034 15:01:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.034 15:01:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.034 15:01:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.034 15:01:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:47.034 15:01:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.034 15:01:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:47.034 Found net devices under 0000:84:00.0: cvl_0_0 00:20:47.034 15:01:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.034 15:01:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.034 15:01:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.034 15:01:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:47.034 15:01:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.034 15:01:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:47.034 Found net devices under 0000:84:00.1: cvl_0_1 00:20:47.034 15:01:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.034 15:01:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:47.034 15:01:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:47.034 15:01:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:47.034 15:01:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.034 15:01:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.034 15:01:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.034 15:01:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:47.034 15:01:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.034 15:01:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.034 15:01:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:47.034 15:01:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.034 15:01:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.034 15:01:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:47.034 15:01:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:47.034 15:01:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.034 15:01:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.034 15:01:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.034 15:01:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.034 15:01:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:47.034 15:01:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.034 15:01:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.034 15:01:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.034 15:01:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:47.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:47.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:20:47.034 00:20:47.034 --- 10.0.0.2 ping statistics --- 00:20:47.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.034 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:20:47.034 15:01:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:20:47.034 00:20:47.034 --- 10.0.0.1 ping statistics --- 00:20:47.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.034 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:47.034 15:01:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.034 15:01:32 -- nvmf/common.sh@411 -- # return 0 00:20:47.034 15:01:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:47.034 15:01:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.034 15:01:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:47.034 15:01:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.034 15:01:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:47.034 15:01:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:47.034 15:01:32 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:47.034 15:01:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:47.034 15:01:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:47.034 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.034 15:01:32 -- nvmf/common.sh@470 -- # nvmfpid=3812016 00:20:47.034 15:01:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:47.034 15:01:32 -- nvmf/common.sh@471 -- # waitforlisten 3812016 00:20:47.034 15:01:32 -- common/autotest_common.sh@817 -- # '[' -z 3812016 ']' 00:20:47.034 15:01:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.034 15:01:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:47.034 15:01:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.034 15:01:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:47.034 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.034 [2024-04-26 15:01:32.469761] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:20:47.034 [2024-04-26 15:01:32.469835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.034 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.034 [2024-04-26 15:01:32.508210] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:47.034 [2024-04-26 15:01:32.534811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.034 [2024-04-26 15:01:32.620300] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:47.034 [2024-04-26 15:01:32.620359] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.034 [2024-04-26 15:01:32.620372] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.034 [2024-04-26 15:01:32.620383] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.034 [2024-04-26 15:01:32.620393] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.034 [2024-04-26 15:01:32.620443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.034 [2024-04-26 15:01:32.620500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.034 [2024-04-26 15:01:32.620566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.034 [2024-04-26 15:01:32.620568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.034 15:01:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:47.034 15:01:32 -- common/autotest_common.sh@850 -- # return 0 00:20:47.034 15:01:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:47.034 15:01:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:47.034 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.034 15:01:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.034 15:01:32 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.034 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.034 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.034 [2024-04-26 15:01:32.758535] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.034 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.034 15:01:32 -- target/multiconnection.sh@21 -- # seq 1 11 00:20:47.034 15:01:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.034 15:01:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:47.034 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.034 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 Malloc1 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.292 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.292 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.292 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 [2024-04-26 15:01:32.813536] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.292 15:01:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.292 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 Malloc2 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.292 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.292 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.292 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.292 15:01:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.292 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 Malloc3 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.292 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.292 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.292 15:01:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:47.292 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:47.293 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.293 15:01:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:47.293 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 Malloc4 00:20:47.293 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:47.293 15:01:32 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:47.293 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:47.293 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.293 15:01:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:47.293 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 Malloc5 00:20:47.293 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:47.293 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:47.293 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:20:47.293 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.293 15:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:47.293 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 Malloc6 00:20:47.293 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:47.293 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.293 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.293 15:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:47.293 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.293 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.552 15:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 Malloc7 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.552 15:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 Malloc8 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.552 15:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.552 Malloc9 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.552 15:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 Malloc10 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.552 15:01:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 Malloc11 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 
Malloc11 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.552 15:01:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:20:47.552 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.552 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:20:47.809 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.809 15:01:33 -- target/multiconnection.sh@28 -- # seq 1 11 00:20:47.809 15:01:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.809 15:01:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:48.372 15:01:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:48.372 15:01:33 -- common/autotest_common.sh@1184 -- # local i=0 00:20:48.372 15:01:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:48.372 15:01:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:48.372 15:01:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:50.269 15:01:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:50.269 15:01:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:50.269 15:01:35 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:20:50.269 15:01:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:50.269 15:01:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:50.269 15:01:35 -- common/autotest_common.sh@1194 -- # return 0 00:20:50.269 15:01:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.269 15:01:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:20:51.200 15:01:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:51.200 15:01:36 -- common/autotest_common.sh@1184 -- # local i=0 00:20:51.200 15:01:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:51.200 15:01:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:51.200 15:01:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:53.131 15:01:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:53.131 15:01:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:53.131 15:01:38 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:20:53.131 15:01:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:53.131 15:01:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:53.131 15:01:38 -- common/autotest_common.sh@1194 -- # return 0 00:20:53.131 15:01:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:53.131 15:01:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:20:53.696 15:01:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:53.696 15:01:39 -- 
common/autotest_common.sh@1184 -- # local i=0 00:20:53.696 15:01:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:53.696 15:01:39 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:53.696 15:01:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:55.593 15:01:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:55.849 15:01:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:55.850 15:01:41 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:20:55.850 15:01:41 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:55.850 15:01:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:55.850 15:01:41 -- common/autotest_common.sh@1194 -- # return 0 00:20:55.850 15:01:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:55.850 15:01:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:20:56.414 15:01:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:20:56.414 15:01:42 -- common/autotest_common.sh@1184 -- # local i=0 00:20:56.414 15:01:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:56.414 15:01:42 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:56.414 15:01:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:58.942 15:01:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:58.942 15:01:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:58.942 15:01:44 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:20:58.942 15:01:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:58.942 15:01:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:58.942 15:01:44 -- common/autotest_common.sh@1194 -- # return 0 00:20:58.942 15:01:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:58.942 15:01:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:20:59.200 15:01:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:20:59.200 15:01:44 -- common/autotest_common.sh@1184 -- # local i=0 00:20:59.200 15:01:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:59.200 15:01:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:59.200 15:01:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:01.724 15:01:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:01.724 15:01:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:01.724 15:01:46 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:21:01.724 15:01:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:01.724 15:01:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:01.724 15:01:46 -- common/autotest_common.sh@1194 -- # return 0 00:21:01.724 15:01:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.724 15:01:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode6 
-a 10.0.0.2 -s 4420 00:21:01.981 15:01:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:01.981 15:01:47 -- common/autotest_common.sh@1184 -- # local i=0 00:21:01.981 15:01:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:01.981 15:01:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:01.981 15:01:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:03.877 15:01:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:03.877 15:01:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:03.877 15:01:49 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:21:03.877 15:01:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:03.877 15:01:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:03.877 15:01:49 -- common/autotest_common.sh@1194 -- # return 0 00:21:03.877 15:01:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:03.877 15:01:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:04.809 15:01:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:04.809 15:01:50 -- common/autotest_common.sh@1184 -- # local i=0 00:21:04.809 15:01:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:04.809 15:01:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:04.810 15:01:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:06.707 15:01:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:06.707 15:01:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:06.707 15:01:52 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:21:06.707 15:01:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:06.707 15:01:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:06.707 15:01:52 -- common/autotest_common.sh@1194 -- # return 0 00:21:06.707 15:01:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:06.707 15:01:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:07.642 15:01:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:07.642 15:01:53 -- common/autotest_common.sh@1184 -- # local i=0 00:21:07.642 15:01:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:07.642 15:01:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:07.642 15:01:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:09.540 15:01:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:09.540 15:01:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:09.540 15:01:55 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:21:09.540 15:01:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:09.540 15:01:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:09.540 15:01:55 -- common/autotest_common.sh@1194 -- # return 0 00:21:09.540 15:01:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:09.540 15:01:55 -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:21:10.472 15:01:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:10.472 15:01:56 -- common/autotest_common.sh@1184 -- # local i=0 00:21:10.472 15:01:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:10.472 15:01:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:10.472 15:01:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:12.429 15:01:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:12.429 15:01:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:12.429 15:01:58 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:21:12.429 15:01:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:12.429 15:01:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:12.429 15:01:58 -- common/autotest_common.sh@1194 -- # return 0 00:21:12.429 15:01:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.429 15:01:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:13.361 15:01:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:13.361 15:01:58 -- common/autotest_common.sh@1184 -- # local i=0 00:21:13.361 15:01:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:13.361 15:01:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:13.361 15:01:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:15.255 15:02:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:15.255 15:02:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:15.255 15:02:00 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:21:15.256 15:02:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:15.256 15:02:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.256 15:02:00 -- common/autotest_common.sh@1194 -- # return 0 00:21:15.256 15:02:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.256 15:02:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:16.183 15:02:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:16.184 15:02:01 -- common/autotest_common.sh@1184 -- # local i=0 00:21:16.184 15:02:01 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:16.184 15:02:01 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:16.184 15:02:01 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:18.706 15:02:03 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:18.706 15:02:03 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:18.706 15:02:03 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:21:18.706 15:02:03 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:18.706 15:02:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:18.706 15:02:03 -- common/autotest_common.sh@1194 -- # return 0 00:21:18.706 15:02:03 -- 
target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:18.706 [global] 00:21:18.706 thread=1 00:21:18.706 invalidate=1 00:21:18.706 rw=read 00:21:18.706 time_based=1 00:21:18.706 runtime=10 00:21:18.706 ioengine=libaio 00:21:18.706 direct=1 00:21:18.706 bs=262144 00:21:18.706 iodepth=64 00:21:18.706 norandommap=1 00:21:18.706 numjobs=1 00:21:18.706 00:21:18.706 [job0] 00:21:18.706 filename=/dev/nvme0n1 00:21:18.706 [job1] 00:21:18.706 filename=/dev/nvme10n1 00:21:18.706 [job2] 00:21:18.706 filename=/dev/nvme1n1 00:21:18.706 [job3] 00:21:18.706 filename=/dev/nvme2n1 00:21:18.706 [job4] 00:21:18.706 filename=/dev/nvme3n1 00:21:18.706 [job5] 00:21:18.706 filename=/dev/nvme4n1 00:21:18.706 [job6] 00:21:18.706 filename=/dev/nvme5n1 00:21:18.706 [job7] 00:21:18.706 filename=/dev/nvme6n1 00:21:18.706 [job8] 00:21:18.706 filename=/dev/nvme7n1 00:21:18.706 [job9] 00:21:18.706 filename=/dev/nvme8n1 00:21:18.706 [job10] 00:21:18.706 filename=/dev/nvme9n1 00:21:18.706 Could not set queue depth (nvme0n1) 00:21:18.706 Could not set queue depth (nvme10n1) 00:21:18.706 Could not set queue depth (nvme1n1) 00:21:18.706 Could not set queue depth (nvme2n1) 00:21:18.706 Could not set queue depth (nvme3n1) 00:21:18.706 Could not set queue depth (nvme4n1) 00:21:18.706 Could not set queue depth (nvme5n1) 00:21:18.706 Could not set queue depth (nvme6n1) 00:21:18.706 Could not set queue depth (nvme7n1) 00:21:18.706 Could not set queue depth (nvme8n1) 00:21:18.706 Could not set queue depth (nvme9n1) 00:21:18.706 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.706 fio-3.35 00:21:18.706 Starting 11 threads 00:21:30.908 00:21:30.908 job0: (groupid=0, jobs=1): err= 0: pid=3816268: Fri Apr 26 15:02:14 2024 00:21:30.908 read: IOPS=800, BW=200MiB/s (210MB/s)(2014MiB/10063msec) 00:21:30.908 slat (usec): min=9, max=113452, avg=662.69, stdev=3096.61 00:21:30.908 clat (usec): min=1934, max=282976, avg=79141.38, stdev=45474.39 00:21:30.908 lat (usec): min=1960, max=286352, avg=79804.07, stdev=45715.80 00:21:30.908 clat percentiles (msec): 00:21:30.908 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 40], 00:21:30.908 | 30.00th=[ 51], 40.00th=[ 63], 
50.00th=[ 74], 60.00th=[ 87], 00:21:30.908 | 70.00th=[ 100], 80.00th=[ 114], 90.00th=[ 134], 95.00th=[ 161], 00:21:30.908 | 99.00th=[ 224], 99.50th=[ 234], 99.90th=[ 249], 99.95th=[ 251], 00:21:30.908 | 99.99th=[ 284] 00:21:30.908 bw ( KiB/s): min=99328, max=341504, per=10.54%, avg=204651.60, stdev=67525.28, samples=20 00:21:30.908 iops : min= 388, max= 1334, avg=799.40, stdev=263.75, samples=20 00:21:30.908 lat (msec) : 2=0.09%, 4=0.84%, 10=1.63%, 20=4.63%, 50=22.32% 00:21:30.908 lat (msec) : 100=40.81%, 250=29.68%, 500=0.01% 00:21:30.908 cpu : usr=0.34%, sys=2.35%, ctx=1841, majf=0, minf=4097 00:21:30.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:30.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.908 issued rwts: total=8057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.908 job1: (groupid=0, jobs=1): err= 0: pid=3816269: Fri Apr 26 15:02:14 2024 00:21:30.908 read: IOPS=734, BW=184MiB/s (193MB/s)(1853MiB/10088msec) 00:21:30.908 slat (usec): min=9, max=103298, avg=517.19, stdev=3289.57 00:21:30.908 clat (usec): min=853, max=268887, avg=86481.70, stdev=60164.37 00:21:30.908 lat (usec): min=879, max=303983, avg=86998.88, stdev=60513.76 00:21:30.908 clat percentiles (msec): 00:21:30.908 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 35], 00:21:30.908 | 30.00th=[ 46], 40.00th=[ 58], 50.00th=[ 72], 60.00th=[ 92], 00:21:30.908 | 70.00th=[ 111], 80.00th=[ 133], 90.00th=[ 186], 95.00th=[ 211], 00:21:30.908 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 255], 99.95th=[ 257], 00:21:30.908 | 99.99th=[ 271] 00:21:30.908 bw ( KiB/s): min=75776, max=342528, per=9.68%, avg=188060.55, stdev=72025.29, samples=20 00:21:30.908 iops : min= 296, max= 1338, avg=734.55, stdev=281.31, samples=20 00:21:30.908 lat (usec) : 1000=0.07% 00:21:30.908 lat (msec) : 2=0.30%, 4=0.24%, 10=3.68%, 20=5.64%, 50=23.65% 00:21:30.908 lat (msec) : 100=30.91%, 250=35.20%, 500=0.30% 00:21:30.908 cpu : usr=0.30%, sys=2.13%, ctx=1936, majf=0, minf=4097 00:21:30.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:30.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.908 issued rwts: total=7411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.908 job2: (groupid=0, jobs=1): err= 0: pid=3816271: Fri Apr 26 15:02:14 2024 00:21:30.908 read: IOPS=576, BW=144MiB/s (151MB/s)(1454MiB/10090msec) 00:21:30.908 slat (usec): min=9, max=179449, avg=1017.99, stdev=5674.30 00:21:30.908 clat (usec): min=918, max=403618, avg=109868.08, stdev=65276.04 00:21:30.908 lat (usec): min=940, max=403640, avg=110886.07, stdev=66131.89 00:21:30.908 clat percentiles (msec): 00:21:30.908 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 42], 00:21:30.908 | 30.00th=[ 73], 40.00th=[ 99], 50.00th=[ 113], 60.00th=[ 126], 00:21:30.908 | 70.00th=[ 140], 80.00th=[ 171], 90.00th=[ 201], 95.00th=[ 218], 00:21:30.908 | 99.00th=[ 251], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 351], 00:21:30.908 | 99.99th=[ 405] 00:21:30.908 bw ( KiB/s): min=76288, max=257536, per=7.58%, avg=147223.70, stdev=48757.99, samples=20 00:21:30.908 iops : min= 298, max= 1006, avg=575.05, stdev=190.46, samples=20 00:21:30.908 lat (usec) : 1000=0.07% 
00:21:30.908 lat (msec) : 2=0.50%, 4=1.01%, 10=5.33%, 20=6.86%, 50=7.96% 00:21:30.908 lat (msec) : 100=18.73%, 250=58.44%, 500=1.10% 00:21:30.908 cpu : usr=0.26%, sys=1.75%, ctx=1524, majf=0, minf=4097 00:21:30.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:30.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.908 issued rwts: total=5815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.908 job3: (groupid=0, jobs=1): err= 0: pid=3816272: Fri Apr 26 15:02:14 2024 00:21:30.908 read: IOPS=694, BW=174MiB/s (182MB/s)(1747MiB/10057msec) 00:21:30.908 slat (usec): min=10, max=93203, avg=879.86, stdev=4335.18 00:21:30.908 clat (usec): min=721, max=318205, avg=91142.58, stdev=56812.54 00:21:30.908 lat (usec): min=744, max=318232, avg=92022.44, stdev=57464.07 00:21:30.908 clat percentiles (msec): 00:21:30.908 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 31], 20.00th=[ 43], 00:21:30.908 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 78], 60.00th=[ 95], 00:21:30.908 | 70.00th=[ 114], 80.00th=[ 138], 90.00th=[ 180], 95.00th=[ 211], 00:21:30.908 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 279], 99.95th=[ 300], 00:21:30.908 | 99.99th=[ 317] 00:21:30.908 bw ( KiB/s): min=70144, max=304640, per=9.12%, avg=177242.70, stdev=71780.02, samples=20 00:21:30.908 iops : min= 274, max= 1190, avg=692.30, stdev=280.42, samples=20 00:21:30.908 lat (usec) : 750=0.03%, 1000=0.04% 00:21:30.908 lat (msec) : 2=0.07%, 4=0.21%, 10=1.55%, 20=2.95%, 50=20.41% 00:21:30.908 lat (msec) : 100=37.44%, 250=36.61%, 500=0.69% 00:21:30.908 cpu : usr=0.26%, sys=1.84%, ctx=1588, majf=0, minf=4097 00:21:30.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:30.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.909 issued rwts: total=6987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.909 job4: (groupid=0, jobs=1): err= 0: pid=3816273: Fri Apr 26 15:02:14 2024 00:21:30.909 read: IOPS=589, BW=147MiB/s (154MB/s)(1482MiB/10060msec) 00:21:30.909 slat (usec): min=10, max=131040, avg=914.15, stdev=5003.75 00:21:30.909 clat (usec): min=1383, max=317005, avg=107565.92, stdev=58857.87 00:21:30.909 lat (usec): min=1406, max=317041, avg=108480.07, stdev=59506.32 00:21:30.909 clat percentiles (msec): 00:21:30.909 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 29], 20.00th=[ 61], 00:21:30.909 | 30.00th=[ 72], 40.00th=[ 88], 50.00th=[ 103], 60.00th=[ 116], 00:21:30.909 | 70.00th=[ 132], 80.00th=[ 163], 90.00th=[ 197], 95.00th=[ 211], 00:21:30.909 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 268], 99.95th=[ 317], 00:21:30.909 | 99.99th=[ 317] 00:21:30.909 bw ( KiB/s): min=68608, max=242688, per=7.73%, avg=150114.50, stdev=51073.56, samples=20 00:21:30.909 iops : min= 268, max= 948, avg=586.30, stdev=199.52, samples=20 00:21:30.909 lat (msec) : 2=0.08%, 4=0.52%, 10=3.31%, 20=3.41%, 50=8.42% 00:21:30.909 lat (msec) : 100=32.76%, 250=51.01%, 500=0.49% 00:21:30.909 cpu : usr=0.33%, sys=1.56%, ctx=1558, majf=0, minf=4097 00:21:30.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:21:30.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.909 issued rwts: total=5928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.909 job5: (groupid=0, jobs=1): err= 0: pid=3816274: Fri Apr 26 15:02:14 2024 00:21:30.909 read: IOPS=614, BW=154MiB/s (161MB/s)(1551MiB/10086msec) 00:21:30.909 slat (usec): min=9, max=197154, avg=818.09, stdev=5971.33 00:21:30.909 clat (usec): min=760, max=345685, avg=103130.18, stdev=62874.70 00:21:30.909 lat (usec): min=786, max=391735, avg=103948.27, stdev=63606.45 00:21:30.909 clat percentiles (msec): 00:21:30.909 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 25], 20.00th=[ 45], 00:21:30.909 | 30.00th=[ 64], 40.00th=[ 84], 50.00th=[ 99], 60.00th=[ 112], 00:21:30.909 | 70.00th=[ 125], 80.00th=[ 155], 90.00th=[ 194], 95.00th=[ 222], 00:21:30.909 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 292], 99.95th=[ 300], 00:21:30.909 | 99.99th=[ 347] 00:21:30.909 bw ( KiB/s): min=63615, max=277504, per=8.09%, avg=157127.50, stdev=57021.13, samples=20 00:21:30.909 iops : min= 248, max= 1084, avg=613.70, stdev=222.75, samples=20 00:21:30.909 lat (usec) : 1000=0.16% 00:21:30.909 lat (msec) : 2=0.40%, 4=0.32%, 10=2.48%, 20=4.14%, 50=15.83% 00:21:30.909 lat (msec) : 100=27.48%, 250=46.90%, 500=2.27% 00:21:30.909 cpu : usr=0.23%, sys=1.64%, ctx=1669, majf=0, minf=4097 00:21:30.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:30.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.909 issued rwts: total=6202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.909 job6: (groupid=0, jobs=1): err= 0: pid=3816275: Fri Apr 26 15:02:14 2024 00:21:30.909 read: IOPS=629, BW=157MiB/s (165MB/s)(1583MiB/10058msec) 00:21:30.909 slat (usec): min=9, max=72822, avg=716.79, stdev=3479.21 00:21:30.909 clat (usec): min=834, max=263702, avg=100811.19, stdev=58917.78 00:21:30.909 lat (usec): min=847, max=263719, avg=101527.98, stdev=59269.99 00:21:30.909 clat percentiles (msec): 00:21:30.909 | 1.00th=[ 3], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 47], 00:21:30.909 | 30.00th=[ 68], 40.00th=[ 82], 50.00th=[ 92], 60.00th=[ 106], 00:21:30.909 | 70.00th=[ 122], 80.00th=[ 144], 90.00th=[ 197], 95.00th=[ 220], 00:21:30.909 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 255], 99.95th=[ 259], 00:21:30.909 | 99.99th=[ 264] 00:21:30.909 bw ( KiB/s): min=70144, max=290816, per=8.26%, avg=160479.40, stdev=54898.20, samples=20 00:21:30.909 iops : min= 274, max= 1136, avg=626.80, stdev=214.45, samples=20 00:21:30.909 lat (usec) : 1000=0.03% 00:21:30.909 lat (msec) : 2=0.44%, 4=0.90%, 10=1.45%, 20=4.09%, 50=14.15% 00:21:30.909 lat (msec) : 100=35.72%, 250=42.93%, 500=0.28% 00:21:30.909 cpu : usr=0.36%, sys=1.83%, ctx=1766, majf=0, minf=3721 00:21:30.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:30.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.909 issued rwts: total=6333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.909 job7: (groupid=0, jobs=1): err= 0: pid=3816276: Fri Apr 26 15:02:14 2024 00:21:30.909 read: IOPS=676, BW=169MiB/s (177MB/s)(1695MiB/10026msec) 00:21:30.909 slat (usec): min=10, max=160457, avg=991.48, 
stdev=5191.13 00:21:30.909 clat (usec): min=1005, max=361680, avg=93524.67, stdev=61510.66 00:21:30.909 lat (usec): min=1022, max=361715, avg=94516.15, stdev=62357.88 00:21:30.909 clat percentiles (usec): 00:21:30.909 | 1.00th=[ 1680], 5.00th=[ 8717], 10.00th=[ 22938], 20.00th=[ 38536], 00:21:30.909 | 30.00th=[ 54264], 40.00th=[ 66847], 50.00th=[ 81265], 60.00th=[100140], 00:21:30.909 | 70.00th=[116917], 80.00th=[145753], 90.00th=[191890], 95.00th=[212861], 00:21:30.909 | 99.00th=[250610], 99.50th=[263193], 99.90th=[274727], 99.95th=[362808], 00:21:30.909 | 99.99th=[362808] 00:21:30.909 bw ( KiB/s): min=62464, max=312832, per=8.85%, avg=171916.95, stdev=67417.36, samples=20 00:21:30.909 iops : min= 244, max= 1222, avg=671.50, stdev=263.33, samples=20 00:21:30.909 lat (msec) : 2=1.24%, 4=1.56%, 10=2.64%, 20=3.44%, 50=18.63% 00:21:30.909 lat (msec) : 100=32.67%, 250=38.67%, 500=1.15% 00:21:30.909 cpu : usr=0.32%, sys=2.13%, ctx=1765, majf=0, minf=4097 00:21:30.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:30.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.909 issued rwts: total=6780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.909 job8: (groupid=0, jobs=1): err= 0: pid=3816277: Fri Apr 26 15:02:14 2024 00:21:30.909 read: IOPS=710, BW=178MiB/s (186MB/s)(1792MiB/10085msec) 00:21:30.909 slat (usec): min=10, max=151838, avg=743.93, stdev=4669.36 00:21:30.909 clat (usec): min=724, max=335214, avg=89178.14, stdev=58363.95 00:21:30.909 lat (usec): min=756, max=365862, avg=89922.07, stdev=59033.23 00:21:30.909 clat percentiles (msec): 00:21:30.909 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 36], 00:21:30.909 | 30.00th=[ 51], 40.00th=[ 65], 50.00th=[ 80], 60.00th=[ 95], 00:21:30.909 | 70.00th=[ 114], 80.00th=[ 134], 90.00th=[ 178], 95.00th=[ 209], 00:21:30.909 | 99.00th=[ 251], 99.50th=[ 264], 99.90th=[ 284], 99.95th=[ 309], 00:21:30.909 | 99.99th=[ 334] 00:21:30.909 bw ( KiB/s): min=78336, max=323584, per=9.36%, avg=181851.10, stdev=58027.56, samples=20 00:21:30.909 iops : min= 306, max= 1264, avg=710.35, stdev=226.67, samples=20 00:21:30.909 lat (usec) : 750=0.01%, 1000=0.06% 00:21:30.909 lat (msec) : 2=0.36%, 4=0.15%, 10=3.78%, 20=3.43%, 50=22.28% 00:21:30.909 lat (msec) : 100=32.83%, 250=36.02%, 500=1.07% 00:21:30.909 cpu : usr=0.31%, sys=2.08%, ctx=1774, majf=0, minf=4097 00:21:30.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:30.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.909 issued rwts: total=7168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.909 job9: (groupid=0, jobs=1): err= 0: pid=3816278: Fri Apr 26 15:02:14 2024 00:21:30.909 read: IOPS=840, BW=210MiB/s (220MB/s)(2114MiB/10058msec) 00:21:30.909 slat (usec): min=10, max=152459, avg=569.36, stdev=3122.50 00:21:30.909 clat (usec): min=770, max=349081, avg=75438.28, stdev=51744.37 00:21:30.909 lat (usec): min=794, max=349118, avg=76007.64, stdev=51911.77 00:21:30.909 clat percentiles (msec): 00:21:30.909 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 32], 00:21:30.909 | 30.00th=[ 41], 40.00th=[ 55], 50.00th=[ 67], 60.00th=[ 81], 00:21:30.909 | 70.00th=[ 94], 
80.00th=[ 113], 90.00th=[ 142], 95.00th=[ 184], 00:21:30.910 | 99.00th=[ 236], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 284], 00:21:30.910 | 99.99th=[ 351] 00:21:30.910 bw ( KiB/s): min=116224, max=427008, per=11.06%, avg=214784.00, stdev=72702.76, samples=20 00:21:30.910 iops : min= 454, max= 1668, avg=838.95, stdev=284.01, samples=20 00:21:30.910 lat (usec) : 1000=0.02% 00:21:30.910 lat (msec) : 2=0.71%, 4=0.95%, 10=3.68%, 20=5.36%, 50=26.85% 00:21:30.910 lat (msec) : 100=37.38%, 250=24.36%, 500=0.70% 00:21:30.910 cpu : usr=0.33%, sys=2.43%, ctx=2188, majf=0, minf=4097 00:21:30.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:30.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.910 issued rwts: total=8454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.910 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.910 job10: (groupid=0, jobs=1): err= 0: pid=3816279: Fri Apr 26 15:02:14 2024 00:21:30.910 read: IOPS=740, BW=185MiB/s (194MB/s)(1858MiB/10033msec) 00:21:30.910 slat (usec): min=9, max=124214, avg=584.87, stdev=3851.46 00:21:30.910 clat (usec): min=1055, max=306853, avg=85726.03, stdev=58565.09 00:21:30.910 lat (usec): min=1078, max=323494, avg=86310.91, stdev=58988.77 00:21:30.910 clat percentiles (msec): 00:21:30.910 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 33], 00:21:30.910 | 30.00th=[ 46], 40.00th=[ 62], 50.00th=[ 79], 60.00th=[ 91], 00:21:30.910 | 70.00th=[ 105], 80.00th=[ 130], 90.00th=[ 178], 95.00th=[ 207], 00:21:30.910 | 99.00th=[ 243], 99.50th=[ 257], 99.90th=[ 296], 99.95th=[ 296], 00:21:30.910 | 99.99th=[ 309] 00:21:30.910 bw ( KiB/s): min=72192, max=385024, per=9.71%, avg=188562.60, stdev=72363.51, samples=20 00:21:30.910 iops : min= 282, max= 1504, avg=736.50, stdev=282.71, samples=20 00:21:30.910 lat (msec) : 2=0.11%, 4=0.63%, 10=2.41%, 20=6.92%, 50=22.21% 00:21:30.910 lat (msec) : 100=35.13%, 250=31.94%, 500=0.66% 00:21:30.910 cpu : usr=0.31%, sys=1.99%, ctx=1896, majf=0, minf=4097 00:21:30.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:30.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.910 issued rwts: total=7430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.910 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.910 00:21:30.910 Run status group 0 (all jobs): 00:21:30.910 READ: bw=1897MiB/s (1989MB/s), 144MiB/s-210MiB/s (151MB/s-220MB/s), io=18.7GiB (20.1GB), run=10026-10090msec 00:21:30.910 00:21:30.910 Disk stats (read/write): 00:21:30.910 nvme0n1: ios=15902/0, merge=0/0, ticks=1243642/0, in_queue=1243642, util=97.22% 00:21:30.910 nvme10n1: ios=14642/0, merge=0/0, ticks=1246294/0, in_queue=1246294, util=97.48% 00:21:30.910 nvme1n1: ios=11460/0, merge=0/0, ticks=1239173/0, in_queue=1239173, util=97.75% 00:21:30.910 nvme2n1: ios=13744/0, merge=0/0, ticks=1245025/0, in_queue=1245025, util=97.86% 00:21:30.910 nvme3n1: ios=11632/0, merge=0/0, ticks=1244816/0, in_queue=1244816, util=97.95% 00:21:30.910 nvme4n1: ios=12218/0, merge=0/0, ticks=1243163/0, in_queue=1243163, util=98.26% 00:21:30.910 nvme5n1: ios=12479/0, merge=0/0, ticks=1246389/0, in_queue=1246389, util=98.42% 00:21:30.910 nvme6n1: ios=13293/0, merge=0/0, ticks=1242767/0, in_queue=1242767, util=98.51% 00:21:30.910 nvme7n1: ios=14160/0, merge=0/0, 
ticks=1241781/0, in_queue=1241781, util=98.92%
00:21:30.910 nvme8n1: ios=16679/0, merge=0/0, ticks=1245178/0, in_queue=1245178, util=99.01%
00:21:30.910 nvme9n1: ios=14637/0, merge=0/0, ticks=1246628/0, in_queue=1246628, util=99.19%
00:21:30.910 15:02:14 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:21:30.910 [global]
00:21:30.910 thread=1
00:21:30.910 invalidate=1
00:21:30.910 rw=randwrite
00:21:30.910 time_based=1
00:21:30.910 runtime=10
00:21:30.910 ioengine=libaio
00:21:30.910 direct=1
00:21:30.910 bs=262144
00:21:30.910 iodepth=64
00:21:30.910 norandommap=1
00:21:30.910 numjobs=1
00:21:30.910
00:21:30.910 [job0]
00:21:30.910 filename=/dev/nvme0n1
00:21:30.910 [job1]
00:21:30.910 filename=/dev/nvme10n1
00:21:30.910 [job2]
00:21:30.910 filename=/dev/nvme1n1
00:21:30.910 [job3]
00:21:30.910 filename=/dev/nvme2n1
00:21:30.910 [job4]
00:21:30.910 filename=/dev/nvme3n1
00:21:30.910 [job5]
00:21:30.910 filename=/dev/nvme4n1
00:21:30.910 [job6]
00:21:30.910 filename=/dev/nvme5n1
00:21:30.910 [job7]
00:21:30.910 filename=/dev/nvme6n1
00:21:30.910 [job8]
00:21:30.910 filename=/dev/nvme7n1
00:21:30.910 [job9]
00:21:30.910 filename=/dev/nvme8n1
00:21:30.910 [job10]
00:21:30.910 filename=/dev/nvme9n1
00:21:30.910 Could not set queue depth (nvme0n1)
00:21:30.910 Could not set queue depth (nvme10n1)
00:21:30.910 Could not set queue depth (nvme1n1)
00:21:30.910 Could not set queue depth (nvme2n1)
00:21:30.910 Could not set queue depth (nvme3n1)
00:21:30.910 Could not set queue depth (nvme4n1)
00:21:30.910 Could not set queue depth (nvme5n1)
00:21:30.910 Could not set queue depth (nvme6n1)
00:21:30.910 Could not set queue depth (nvme7n1)
00:21:30.910 Could not set queue depth (nvme8n1)
00:21:30.910 Could not set queue depth (nvme9n1)
00:21:30.910 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:21:30.910 fio-3.35
00:21:30.910 Starting 11 threads
00:21:40.891
00:21:40.891 job0: (groupid=0, jobs=1): err= 0: pid=3817447: Fri Apr 26 15:02:25 2024
00:21:40.891 write: IOPS=503, BW=126MiB/s (132MB/s)(1264MiB/10038msec); 0 zone resets
00:21:40.891 slat (usec): min=18, max=123573,
avg=1145.76, stdev=4053.78 00:21:40.891 clat (usec): min=924, max=396593, avg=125861.16, stdev=93739.03 00:21:40.891 lat (usec): min=965, max=400835, avg=127006.92, stdev=94640.39 00:21:40.891 clat percentiles (msec): 00:21:40.891 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 39], 00:21:40.891 | 30.00th=[ 64], 40.00th=[ 86], 50.00th=[ 104], 60.00th=[ 124], 00:21:40.891 | 70.00th=[ 153], 80.00th=[ 224], 90.00th=[ 275], 95.00th=[ 305], 00:21:40.891 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 393], 00:21:40.891 | 99.99th=[ 397] 00:21:40.891 bw ( KiB/s): min=52224, max=216576, per=9.32%, avg=127806.70, stdev=46583.69, samples=20 00:21:40.891 iops : min= 204, max= 846, avg=499.20, stdev=181.95, samples=20 00:21:40.891 lat (usec) : 1000=0.04% 00:21:40.891 lat (msec) : 2=0.14%, 4=0.30%, 10=3.09%, 20=6.11%, 50=15.94% 00:21:40.891 lat (msec) : 100=23.48%, 250=35.82%, 500=15.09% 00:21:40.891 cpu : usr=1.95%, sys=1.72%, ctx=3427, majf=0, minf=1 00:21:40.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:40.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.891 issued rwts: total=0,5056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.891 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.891 job1: (groupid=0, jobs=1): err= 0: pid=3817448: Fri Apr 26 15:02:25 2024 00:21:40.891 write: IOPS=448, BW=112MiB/s (118MB/s)(1142MiB/10175msec); 0 zone resets 00:21:40.891 slat (usec): min=24, max=79172, avg=1598.10, stdev=4926.62 00:21:40.891 clat (usec): min=1545, max=404734, avg=140912.66, stdev=99862.67 00:21:40.891 lat (usec): min=1601, max=404791, avg=142510.75, stdev=101173.40 00:21:40.891 clat percentiles (msec): 00:21:40.891 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 29], 20.00th=[ 43], 00:21:40.891 | 30.00th=[ 59], 40.00th=[ 91], 50.00th=[ 124], 60.00th=[ 153], 00:21:40.891 | 70.00th=[ 197], 80.00th=[ 257], 90.00th=[ 292], 95.00th=[ 309], 00:21:40.891 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 397], 00:21:40.891 | 99.99th=[ 405] 00:21:40.891 bw ( KiB/s): min=53248, max=300032, per=8.40%, avg=115232.50, stdev=67005.26, samples=20 00:21:40.891 iops : min= 208, max= 1172, avg=450.10, stdev=261.76, samples=20 00:21:40.891 lat (msec) : 2=0.04%, 4=0.37%, 10=1.95%, 20=3.83%, 50=20.41% 00:21:40.891 lat (msec) : 100=16.32%, 250=36.03%, 500=21.05% 00:21:40.891 cpu : usr=1.84%, sys=1.45%, ctx=2657, majf=0, minf=1 00:21:40.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:40.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.891 issued rwts: total=0,4566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.891 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.891 job2: (groupid=0, jobs=1): err= 0: pid=3817449: Fri Apr 26 15:02:25 2024 00:21:40.891 write: IOPS=314, BW=78.7MiB/s (82.6MB/s)(801MiB/10176msec); 0 zone resets 00:21:40.891 slat (usec): min=32, max=83659, avg=2555.53, stdev=6355.06 00:21:40.891 clat (msec): min=3, max=418, avg=200.42, stdev=90.06 00:21:40.891 lat (msec): min=4, max=418, avg=202.98, stdev=91.33 00:21:40.891 clat percentiles (msec): 00:21:40.891 | 1.00th=[ 11], 5.00th=[ 50], 10.00th=[ 84], 20.00th=[ 104], 00:21:40.891 | 30.00th=[ 138], 40.00th=[ 190], 50.00th=[ 211], 60.00th=[ 243], 00:21:40.891 | 70.00th=[ 264], 80.00th=[ 279], 
90.00th=[ 305], 95.00th=[ 342], 00:21:40.891 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 405], 99.95th=[ 418], 00:21:40.892 | 99.99th=[ 418] 00:21:40.892 bw ( KiB/s): min=51200, max=159744, per=5.87%, avg=80435.10, stdev=27136.54, samples=20 00:21:40.892 iops : min= 200, max= 624, avg=314.15, stdev=105.98, samples=20 00:21:40.892 lat (msec) : 4=0.03%, 10=0.84%, 20=2.00%, 50=2.25%, 100=13.76% 00:21:40.892 lat (msec) : 250=44.49%, 500=36.63% 00:21:40.892 cpu : usr=1.25%, sys=0.88%, ctx=1479, majf=0, minf=1 00:21:40.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:21:40.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.892 issued rwts: total=0,3205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.892 job3: (groupid=0, jobs=1): err= 0: pid=3817461: Fri Apr 26 15:02:25 2024 00:21:40.892 write: IOPS=638, BW=160MiB/s (167MB/s)(1617MiB/10125msec); 0 zone resets 00:21:40.892 slat (usec): min=18, max=80298, avg=768.24, stdev=3257.73 00:21:40.892 clat (usec): min=951, max=419677, avg=99360.40, stdev=85938.90 00:21:40.892 lat (usec): min=993, max=424821, avg=100128.64, stdev=86820.55 00:21:40.892 clat percentiles (msec): 00:21:40.892 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 13], 20.00th=[ 26], 00:21:40.892 | 30.00th=[ 43], 40.00th=[ 54], 50.00th=[ 75], 60.00th=[ 92], 00:21:40.892 | 70.00th=[ 122], 80.00th=[ 161], 90.00th=[ 249], 95.00th=[ 292], 00:21:40.892 | 99.00th=[ 321], 99.50th=[ 334], 99.90th=[ 414], 99.95th=[ 418], 00:21:40.892 | 99.99th=[ 422] 00:21:40.892 bw ( KiB/s): min=57344, max=262656, per=11.96%, avg=163954.50, stdev=63426.18, samples=20 00:21:40.892 iops : min= 224, max= 1026, avg=640.40, stdev=247.78, samples=20 00:21:40.892 lat (usec) : 1000=0.02% 00:21:40.892 lat (msec) : 2=0.25%, 4=1.13%, 10=6.32%, 20=8.13%, 50=19.42% 00:21:40.892 lat (msec) : 100=28.57%, 250=26.39%, 500=9.77% 00:21:40.892 cpu : usr=2.55%, sys=1.93%, ctx=4819, majf=0, minf=1 00:21:40.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:40.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.892 issued rwts: total=0,6468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.892 job4: (groupid=0, jobs=1): err= 0: pid=3817462: Fri Apr 26 15:02:25 2024 00:21:40.892 write: IOPS=465, BW=116MiB/s (122MB/s)(1177MiB/10103msec); 0 zone resets 00:21:40.892 slat (usec): min=27, max=61225, avg=1584.13, stdev=4215.24 00:21:40.892 clat (usec): min=931, max=367904, avg=135711.29, stdev=80892.82 00:21:40.892 lat (usec): min=1023, max=382293, avg=137295.42, stdev=81929.67 00:21:40.892 clat percentiles (msec): 00:21:40.892 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 51], 20.00th=[ 83], 00:21:40.892 | 30.00th=[ 88], 40.00th=[ 96], 50.00th=[ 121], 60.00th=[ 129], 00:21:40.892 | 70.00th=[ 150], 80.00th=[ 197], 90.00th=[ 275], 95.00th=[ 313], 00:21:40.892 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 363], 99.95th=[ 363], 00:21:40.892 | 99.99th=[ 368] 00:21:40.892 bw ( KiB/s): min=50688, max=201216, per=8.67%, avg=118860.00, stdev=42004.76, samples=20 00:21:40.892 iops : min= 198, max= 786, avg=464.25, stdev=164.05, samples=20 00:21:40.892 lat (usec) : 1000=0.02% 00:21:40.892 lat (msec) : 2=0.11%, 4=0.51%, 10=0.59%, 20=2.04%, 
50=6.78% 00:21:40.892 lat (msec) : 100=31.42%, 250=45.12%, 500=13.41% 00:21:40.892 cpu : usr=1.82%, sys=1.28%, ctx=2349, majf=0, minf=1 00:21:40.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:40.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.892 issued rwts: total=0,4707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.892 job5: (groupid=0, jobs=1): err= 0: pid=3817463: Fri Apr 26 15:02:25 2024 00:21:40.892 write: IOPS=632, BW=158MiB/s (166MB/s)(1603MiB/10139msec); 0 zone resets 00:21:40.892 slat (usec): min=24, max=55520, avg=1068.76, stdev=3001.38 00:21:40.892 clat (usec): min=1298, max=404410, avg=100060.63, stdev=76675.59 00:21:40.892 lat (usec): min=1395, max=404485, avg=101129.40, stdev=77336.84 00:21:40.892 clat percentiles (msec): 00:21:40.892 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 27], 20.00th=[ 44], 00:21:40.892 | 30.00th=[ 47], 40.00th=[ 62], 50.00th=[ 84], 60.00th=[ 92], 00:21:40.892 | 70.00th=[ 116], 80.00th=[ 142], 90.00th=[ 226], 95.00th=[ 268], 00:21:40.892 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 401], 00:21:40.892 | 99.99th=[ 405] 00:21:40.892 bw ( KiB/s): min=63488, max=356864, per=11.85%, avg=162498.55, stdev=80589.33, samples=20 00:21:40.892 iops : min= 248, max= 1394, avg=634.75, stdev=314.80, samples=20 00:21:40.892 lat (msec) : 2=0.14%, 4=0.67%, 10=2.84%, 20=4.13%, 50=28.15% 00:21:40.892 lat (msec) : 100=28.21%, 250=28.79%, 500=7.06% 00:21:40.892 cpu : usr=2.39%, sys=1.84%, ctx=3336, majf=0, minf=1 00:21:40.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:40.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.892 issued rwts: total=0,6412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.892 job6: (groupid=0, jobs=1): err= 0: pid=3817464: Fri Apr 26 15:02:25 2024 00:21:40.892 write: IOPS=769, BW=192MiB/s (202MB/s)(1949MiB/10133msec); 0 zone resets 00:21:40.892 slat (usec): min=20, max=45022, avg=869.10, stdev=2430.95 00:21:40.892 clat (usec): min=862, max=367016, avg=82273.00, stdev=61602.83 00:21:40.892 lat (usec): min=897, max=373237, avg=83142.10, stdev=62099.09 00:21:40.892 clat percentiles (msec): 00:21:40.892 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 43], 00:21:40.892 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 59], 60.00th=[ 86], 00:21:40.892 | 70.00th=[ 103], 80.00th=[ 129], 90.00th=[ 155], 95.00th=[ 197], 00:21:40.892 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 363], 99.95th=[ 363], 00:21:40.892 | 99.99th=[ 368] 00:21:40.892 bw ( KiB/s): min=56832, max=357888, per=14.43%, avg=197889.65, stdev=85136.64, samples=20 00:21:40.892 iops : min= 222, max= 1398, avg=772.95, stdev=332.52, samples=20 00:21:40.892 lat (usec) : 1000=0.04% 00:21:40.892 lat (msec) : 2=0.40%, 4=1.19%, 10=3.64%, 20=4.76%, 50=34.59% 00:21:40.892 lat (msec) : 100=24.77%, 250=28.17%, 500=2.44% 00:21:40.892 cpu : usr=2.70%, sys=2.40%, ctx=3975, majf=0, minf=1 00:21:40.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:40.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.892 
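The job file printed above, between the fio-wrapper invocation and "Starting 11 threads", fully describes the randwrite workload whose per-job reports appear here. As a reference point, a single job from it can be approximated by a standalone fio invocation like the sketch below; this is an approximation only: the device path is job0's from this run, and whatever nvmf-specific plumbing fio-wrapper adds around fio is deliberately omitted.

    fio --name=job0 --filename=/dev/nvme0n1 --rw=randwrite --bs=262144 \
        --ioengine=libaio --direct=1 --iodepth=64 --thread --numjobs=1 \
        --time_based --runtime=10 --norandommap --invalidate=1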
issued rwts: total=0,7795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.892 job7: (groupid=0, jobs=1): err= 0: pid=3817465: Fri Apr 26 15:02:25 2024 00:21:40.892 write: IOPS=377, BW=94.5MiB/s (99.1MB/s)(957MiB/10131msec); 0 zone resets 00:21:40.892 slat (usec): min=30, max=185187, avg=2057.50, stdev=6179.15 00:21:40.892 clat (usec): min=1579, max=487007, avg=167002.00, stdev=100240.62 00:21:40.892 lat (usec): min=1652, max=487048, avg=169059.50, stdev=101537.17 00:21:40.892 clat percentiles (msec): 00:21:40.892 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 28], 20.00th=[ 68], 00:21:40.892 | 30.00th=[ 97], 40.00th=[ 131], 50.00th=[ 157], 60.00th=[ 207], 00:21:40.892 | 70.00th=[ 239], 80.00th=[ 271], 90.00th=[ 300], 95.00th=[ 317], 00:21:40.892 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 464], 99.95th=[ 489], 00:21:40.892 | 99.99th=[ 489] 00:21:40.892 bw ( KiB/s): min=53248, max=172032, per=7.03%, avg=96410.20, stdev=38005.60, samples=20 00:21:40.892 iops : min= 208, max= 672, avg=376.55, stdev=148.43, samples=20 00:21:40.892 lat (msec) : 2=0.03%, 4=0.50%, 10=3.66%, 20=3.79%, 50=7.89% 00:21:40.892 lat (msec) : 100=14.91%, 250=42.28%, 500=26.95% 00:21:40.892 cpu : usr=1.58%, sys=1.01%, ctx=1993, majf=0, minf=1 00:21:40.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:40.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.892 issued rwts: total=0,3829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.892 job8: (groupid=0, jobs=1): err= 0: pid=3817466: Fri Apr 26 15:02:25 2024 00:21:40.892 write: IOPS=293, BW=73.3MiB/s (76.9MB/s)(746MiB/10176msec); 0 zone resets 00:21:40.892 slat (usec): min=17, max=58951, avg=2545.13, stdev=6030.76 00:21:40.892 clat (usec): min=915, max=428946, avg=215490.43, stdev=85144.22 00:21:40.892 lat (usec): min=959, max=429010, avg=218035.56, stdev=86288.86 00:21:40.892 clat percentiles (usec): 00:21:40.892 | 1.00th=[ 1860], 5.00th=[ 22938], 10.00th=[ 82314], 20.00th=[168821], 00:21:40.892 | 30.00th=[198181], 40.00th=[217056], 50.00th=[233833], 60.00th=[246416], 00:21:40.892 | 70.00th=[261096], 80.00th=[278922], 90.00th=[299893], 95.00th=[329253], 00:21:40.892 | 99.00th=[383779], 99.50th=[396362], 99.90th=[404751], 99.95th=[425722], 00:21:40.892 | 99.99th=[429917] 00:21:40.892 bw ( KiB/s): min=47104, max=139264, per=5.46%, avg=74798.25, stdev=22842.80, samples=20 00:21:40.892 iops : min= 184, max= 544, avg=292.15, stdev=89.27, samples=20 00:21:40.892 lat (usec) : 1000=0.10% 00:21:40.892 lat (msec) : 2=0.94%, 4=1.14%, 10=1.34%, 20=1.24%, 50=3.58% 00:21:40.892 lat (msec) : 100=4.15%, 250=50.12%, 500=37.39% 00:21:40.892 cpu : usr=1.17%, sys=0.96%, ctx=1483, majf=0, minf=1 00:21:40.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:21:40.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.892 issued rwts: total=0,2985,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.892 job9: (groupid=0, jobs=1): err= 0: pid=3817467: Fri Apr 26 15:02:25 2024 00:21:40.892 write: IOPS=526, BW=132MiB/s (138MB/s)(1330MiB/10098msec); 0 zone resets 00:21:40.892 slat (usec): min=25, max=122459, avg=892.20, 
stdev=3945.68 00:21:40.892 clat (usec): min=1153, max=535944, avg=120384.38, stdev=92153.80 00:21:40.892 lat (usec): min=1228, max=542596, avg=121276.58, stdev=92972.27 00:21:40.892 clat percentiles (msec): 00:21:40.892 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 22], 20.00th=[ 40], 00:21:40.892 | 30.00th=[ 51], 40.00th=[ 74], 50.00th=[ 93], 60.00th=[ 124], 00:21:40.892 | 70.00th=[ 161], 80.00th=[ 215], 90.00th=[ 257], 95.00th=[ 284], 00:21:40.892 | 99.00th=[ 376], 99.50th=[ 443], 99.90th=[ 527], 99.95th=[ 531], 00:21:40.893 | 99.99th=[ 535] 00:21:40.893 bw ( KiB/s): min=55296, max=305152, per=9.81%, avg=134556.30, stdev=54301.72, samples=20 00:21:40.893 iops : min= 216, max= 1192, avg=525.55, stdev=212.13, samples=20 00:21:40.893 lat (msec) : 2=0.13%, 4=0.39%, 10=2.91%, 20=5.38%, 50=20.83% 00:21:40.893 lat (msec) : 100=23.40%, 250=35.21%, 500=11.58%, 750=0.17% 00:21:40.893 cpu : usr=1.89%, sys=1.90%, ctx=3854, majf=0, minf=1 00:21:40.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:40.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.893 issued rwts: total=0,5320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.893 job10: (groupid=0, jobs=1): err= 0: pid=3817468: Fri Apr 26 15:02:25 2024 00:21:40.893 write: IOPS=410, BW=103MiB/s (108MB/s)(1039MiB/10124msec); 0 zone resets 00:21:40.893 slat (usec): min=25, max=116266, avg=1580.67, stdev=5406.12 00:21:40.893 clat (usec): min=1366, max=435847, avg=154252.45, stdev=106341.74 00:21:40.893 lat (usec): min=1408, max=435890, avg=155833.12, stdev=107530.78 00:21:40.893 clat percentiles (msec): 00:21:40.893 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 19], 20.00th=[ 36], 00:21:40.893 | 30.00th=[ 79], 40.00th=[ 118], 50.00th=[ 134], 60.00th=[ 180], 00:21:40.893 | 70.00th=[ 224], 80.00th=[ 266], 90.00th=[ 300], 95.00th=[ 334], 00:21:40.893 | 99.00th=[ 376], 99.50th=[ 393], 99.90th=[ 418], 99.95th=[ 418], 00:21:40.893 | 99.99th=[ 435] 00:21:40.893 bw ( KiB/s): min=55296, max=187392, per=7.64%, avg=104745.50, stdev=42790.44, samples=20 00:21:40.893 iops : min= 216, max= 732, avg=409.10, stdev=167.12, samples=20 00:21:40.893 lat (msec) : 2=0.12%, 4=0.46%, 10=3.15%, 20=7.63%, 50=12.23% 00:21:40.893 lat (msec) : 100=10.16%, 250=42.72%, 500=23.54% 00:21:40.893 cpu : usr=1.47%, sys=1.34%, ctx=2702, majf=0, minf=1 00:21:40.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:40.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.893 issued rwts: total=0,4155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.893 00:21:40.893 Run status group 0 (all jobs): 00:21:40.893 WRITE: bw=1339MiB/s (1404MB/s), 73.3MiB/s-192MiB/s (76.9MB/s-202MB/s), io=13.3GiB (14.3GB), run=10038-10176msec 00:21:40.893 00:21:40.893 Disk stats (read/write): 00:21:40.893 nvme0n1: ios=49/9795, merge=0/0, ticks=44/1219193, in_queue=1219237, util=97.11% 00:21:40.893 nvme10n1: ios=49/9092, merge=0/0, ticks=208/1241917, in_queue=1242125, util=98.55% 00:21:40.893 nvme1n1: ios=44/6387, merge=0/0, ticks=1421/1235096, in_queue=1236517, util=99.75% 00:21:40.893 nvme2n1: ios=42/12654, merge=0/0, ticks=29/1230860, in_queue=1230889, util=97.68% 00:21:40.893 nvme3n1: ios=20/9155, 
merge=0/0, ticks=188/1204439, in_queue=1204627, util=98.01% 00:21:40.893 nvme4n1: ios=50/12620, merge=0/0, ticks=1007/1217016, in_queue=1218023, util=99.78% 00:21:40.893 nvme5n1: ios=0/15393, merge=0/0, ticks=0/1217487, in_queue=1217487, util=98.10% 00:21:40.893 nvme6n1: ios=45/7453, merge=0/0, ticks=1114/1207434, in_queue=1208548, util=99.84% 00:21:40.893 nvme7n1: ios=26/5950, merge=0/0, ticks=72/1243685, in_queue=1243757, util=99.00% 00:21:40.893 nvme8n1: ios=36/10384, merge=0/0, ticks=1483/1217654, in_queue=1219137, util=100.00% 00:21:40.893 nvme9n1: ios=0/8035, merge=0/0, ticks=0/1221666, in_queue=1221666, util=98.96% 00:21:40.893 15:02:25 -- target/multiconnection.sh@36 -- # sync 00:21:40.893 15:02:25 -- target/multiconnection.sh@37 -- # seq 1 11 00:21:40.893 15:02:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.893 15:02:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:40.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:40.893 15:02:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:40.893 15:02:25 -- common/autotest_common.sh@1205 -- # local i=0 00:21:40.893 15:02:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:40.893 15:02:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:21:40.893 15:02:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:40.893 15:02:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:21:40.893 15:02:26 -- common/autotest_common.sh@1217 -- # return 0 00:21:40.893 15:02:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.893 15:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.893 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:21:40.893 15:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.893 15:02:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.893 15:02:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:40.893 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:40.893 15:02:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:40.893 15:02:26 -- common/autotest_common.sh@1205 -- # local i=0 00:21:40.893 15:02:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:40.893 15:02:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:21:40.893 15:02:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:40.893 15:02:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:21:40.893 15:02:26 -- common/autotest_common.sh@1217 -- # return 0 00:21:40.893 15:02:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:40.893 15:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.893 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:21:40.893 15:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.893 15:02:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.893 15:02:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:41.151 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:41.151 15:02:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:41.151 15:02:26 -- common/autotest_common.sh@1205 -- # local i=0 00:21:41.151 15:02:26 -- common/autotest_common.sh@1206 -- 
# lsblk -o NAME,SERIAL 00:21:41.151 15:02:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:21:41.151 15:02:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:41.151 15:02:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:21:41.151 15:02:26 -- common/autotest_common.sh@1217 -- # return 0 00:21:41.151 15:02:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:41.151 15:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.151 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:21:41.151 15:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.151 15:02:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.151 15:02:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:41.409 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:41.409 15:02:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:41.409 15:02:26 -- common/autotest_common.sh@1205 -- # local i=0 00:21:41.409 15:02:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:41.409 15:02:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:21:41.409 15:02:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:41.409 15:02:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:21:41.409 15:02:26 -- common/autotest_common.sh@1217 -- # return 0 00:21:41.409 15:02:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:41.409 15:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.409 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:21:41.409 15:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.409 15:02:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.409 15:02:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:41.669 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:41.669 15:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:41.669 15:02:27 -- common/autotest_common.sh@1205 -- # local i=0 00:21:41.669 15:02:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:41.669 15:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:21:41.669 15:02:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:41.669 15:02:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:21:41.670 15:02:27 -- common/autotest_common.sh@1217 -- # return 0 00:21:41.670 15:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:41.670 15:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.670 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:21:41.670 15:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.670 15:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.670 15:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:41.929 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:41.929 15:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:41.929 15:02:27 -- common/autotest_common.sh@1205 -- # local i=0 00:21:41.929 15:02:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:41.929 15:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w 
SPDK6 00:21:41.929 15:02:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:41.929 15:02:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:21:41.929 15:02:27 -- common/autotest_common.sh@1217 -- # return 0 00:21:41.929 15:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:41.929 15:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.929 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:21:41.929 15:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.929 15:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.929 15:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:41.929 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:41.929 15:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:41.929 15:02:27 -- common/autotest_common.sh@1205 -- # local i=0 00:21:41.929 15:02:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:41.929 15:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:21:41.929 15:02:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:41.929 15:02:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:21:41.929 15:02:27 -- common/autotest_common.sh@1217 -- # return 0 00:21:41.929 15:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:41.929 15:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.929 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:21:41.929 15:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.929 15:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.929 15:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:42.188 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:42.188 15:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:42.188 15:02:27 -- common/autotest_common.sh@1205 -- # local i=0 00:21:42.188 15:02:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:42.188 15:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:21:42.188 15:02:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:42.188 15:02:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:21:42.188 15:02:27 -- common/autotest_common.sh@1217 -- # return 0 00:21:42.189 15:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:42.189 15:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.189 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:21:42.189 15:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.189 15:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:42.189 15:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:42.189 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:42.189 15:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:42.189 15:02:27 -- common/autotest_common.sh@1205 -- # local i=0 00:21:42.189 15:02:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:42.189 15:02:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:21:42.189 15:02:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 
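The trace running above and below repeats one teardown pattern per subsystem: disconnect the initiator from the NQN, poll lsblk until no block device reports the subsystem's SPDK serial anymore, then delete the subsystem over RPC. Condensed into a sketch, where rpc_cmd is the test suite's RPC wrapper visible in the trace and the 1-second poll interval is illustrative:

    for i in $(seq 1 11); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # waitforserial_disconnect: wait until serial SPDK${i} disappears
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done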
00:21:42.189 15:02:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:21:42.189 15:02:27 -- common/autotest_common.sh@1217 -- # return 0 00:21:42.189 15:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:42.189 15:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.189 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:21:42.447 15:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.447 15:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:42.447 15:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:42.447 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:42.447 15:02:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:42.447 15:02:28 -- common/autotest_common.sh@1205 -- # local i=0 00:21:42.447 15:02:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:42.447 15:02:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:21:42.447 15:02:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:42.447 15:02:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:21:42.447 15:02:28 -- common/autotest_common.sh@1217 -- # return 0 00:21:42.447 15:02:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:42.447 15:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.447 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:21:42.447 15:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.447 15:02:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:42.447 15:02:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:42.447 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:42.447 15:02:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:42.447 15:02:28 -- common/autotest_common.sh@1205 -- # local i=0 00:21:42.447 15:02:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:42.447 15:02:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:21:42.447 15:02:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:42.447 15:02:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:21:42.447 15:02:28 -- common/autotest_common.sh@1217 -- # return 0 00:21:42.447 15:02:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:42.447 15:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.447 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:21:42.447 15:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.447 15:02:28 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:42.447 15:02:28 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:42.447 15:02:28 -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:42.447 15:02:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:42.447 15:02:28 -- nvmf/common.sh@117 -- # sync 00:21:42.447 15:02:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.447 15:02:28 -- nvmf/common.sh@120 -- # set +e 00:21:42.447 15:02:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.447 15:02:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.447 rmmod nvme_tcp 00:21:42.705 rmmod nvme_fabrics 00:21:42.705 rmmod nvme_keyring 00:21:42.705 15:02:28 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:42.705 15:02:28 -- nvmf/common.sh@124 -- # set -e
00:21:42.705 15:02:28 -- nvmf/common.sh@125 -- # return 0
00:21:42.705 15:02:28 -- nvmf/common.sh@478 -- # '[' -n 3812016 ']'
00:21:42.705 15:02:28 -- nvmf/common.sh@479 -- # killprocess 3812016
00:21:42.705 15:02:28 -- common/autotest_common.sh@936 -- # '[' -z 3812016 ']'
00:21:42.705 15:02:28 -- common/autotest_common.sh@940 -- # kill -0 3812016
00:21:42.705 15:02:28 -- common/autotest_common.sh@941 -- # uname
00:21:42.705 15:02:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:42.705 15:02:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3812016
00:21:42.705 15:02:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:42.705 15:02:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:42.705 15:02:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3812016'
00:21:42.705 killing process with pid 3812016
00:21:42.705 15:02:28 -- common/autotest_common.sh@955 -- # kill 3812016
00:21:42.705 15:02:28 -- common/autotest_common.sh@960 -- # wait 3812016
00:21:42.705 15:02:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:21:43.271 15:02:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:21:43.271 15:02:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:21:43.271 15:02:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:43.271 15:02:28 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:43.271 15:02:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:43.271 15:02:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:43.271 15:02:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:45.209 15:02:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:45.209
00:21:45.209 real 1m0.725s
00:21:45.209 user 3m30.283s
00:21:45.209 sys 0m23.381s
00:21:45.209 15:02:30 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:45.209 15:02:30 -- common/autotest_common.sh@10 -- # set +x
00:21:45.209 ************************************
00:21:45.209 END TEST nvmf_multiconnection
00:21:45.209 ************************************
00:21:45.209 15:02:30 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:21:45.209 15:02:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:21:45.209 15:02:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:45.209 15:02:30 -- common/autotest_common.sh@10 -- # set +x
00:21:45.209 ************************************
00:21:45.209 START TEST nvmf_initiator_timeout
00:21:45.209 ************************************
00:21:45.209 15:02:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:21:45.467 * Looking for test storage...
00:21:45.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.467 15:02:30 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.467 15:02:30 -- nvmf/common.sh@7 -- # uname -s 00:21:45.467 15:02:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.467 15:02:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.467 15:02:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.467 15:02:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.467 15:02:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.467 15:02:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.467 15:02:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.467 15:02:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.467 15:02:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.467 15:02:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.467 15:02:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:45.467 15:02:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:45.467 15:02:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.467 15:02:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.467 15:02:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.467 15:02:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.467 15:02:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.467 15:02:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.467 15:02:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.467 15:02:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.467 15:02:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.467 15:02:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.468 15:02:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.468 15:02:31 -- paths/export.sh@5 -- # export PATH 00:21:45.468 15:02:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.468 15:02:31 -- nvmf/common.sh@47 -- # : 0 00:21:45.468 15:02:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.468 15:02:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.468 15:02:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.468 15:02:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.468 15:02:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.468 15:02:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.468 15:02:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.468 15:02:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.468 15:02:31 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.468 15:02:31 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.468 15:02:31 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:45.468 15:02:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:45.468 15:02:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.468 15:02:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:45.468 15:02:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:45.468 15:02:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:45.468 15:02:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.468 15:02:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.468 15:02:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.468 15:02:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:45.468 15:02:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:45.468 15:02:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.468 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:21:47.375 15:02:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:47.375 15:02:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.375 15:02:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.375 15:02:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.375 15:02:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.375 15:02:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.375 15:02:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.375 15:02:32 -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.375 15:02:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.375 
15:02:32 -- nvmf/common.sh@296 -- # e810=() 00:21:47.375 15:02:32 -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.375 15:02:32 -- nvmf/common.sh@297 -- # x722=() 00:21:47.375 15:02:32 -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.375 15:02:32 -- nvmf/common.sh@298 -- # mlx=() 00:21:47.375 15:02:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.375 15:02:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.375 15:02:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.375 15:02:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.375 15:02:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.375 15:02:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.375 15:02:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:47.375 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:47.375 15:02:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.375 15:02:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:47.375 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:47.375 15:02:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.375 15:02:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.375 15:02:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.375 15:02:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:47.375 15:02:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.375 15:02:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:84:00.0: cvl_0_0' 00:21:47.375 Found net devices under 0000:84:00.0: cvl_0_0 00:21:47.375 15:02:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.375 15:02:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.375 15:02:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.375 15:02:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:47.375 15:02:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.375 15:02:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:47.375 Found net devices under 0000:84:00.1: cvl_0_1 00:21:47.375 15:02:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.375 15:02:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:47.375 15:02:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:47.375 15:02:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:47.375 15:02:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:47.375 15:02:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.375 15:02:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.375 15:02:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.375 15:02:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.375 15:02:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.375 15:02:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.375 15:02:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.375 15:02:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.375 15:02:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.375 15:02:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.375 15:02:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.375 15:02:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.375 15:02:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.375 15:02:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.375 15:02:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.375 15:02:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.375 15:02:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.375 15:02:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.375 15:02:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.375 15:02:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:21:47.375 00:21:47.375 --- 10.0.0.2 ping statistics --- 00:21:47.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.375 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:47.375 15:02:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
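The two pings here exercise the namespace topology that nvmf_tcp_init assembled just above: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the traced commands, with interface names and addresses taken from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator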
00:21:47.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:21:47.375 00:21:47.375 --- 10.0.0.1 ping statistics --- 00:21:47.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.375 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:21:47.375 15:02:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.635 15:02:33 -- nvmf/common.sh@411 -- # return 0 00:21:47.635 15:02:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:47.635 15:02:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.635 15:02:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:47.635 15:02:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:47.635 15:02:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.635 15:02:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:47.635 15:02:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:47.635 15:02:33 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:47.635 15:02:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:47.635 15:02:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:47.635 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:21:47.635 15:02:33 -- nvmf/common.sh@470 -- # nvmfpid=3820825 00:21:47.635 15:02:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:47.635 15:02:33 -- nvmf/common.sh@471 -- # waitforlisten 3820825 00:21:47.635 15:02:33 -- common/autotest_common.sh@817 -- # '[' -z 3820825 ']' 00:21:47.635 15:02:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.635 15:02:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:47.635 15:02:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.635 15:02:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:47.635 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:21:47.635 [2024-04-26 15:02:33.181538] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:21:47.635 [2024-04-26 15:02:33.181615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.635 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.635 [2024-04-26 15:02:33.220915] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:47.635 [2024-04-26 15:02:33.248003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.635 [2024-04-26 15:02:33.333802] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.635 [2024-04-26 15:02:33.333874] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.635 [2024-04-26 15:02:33.333889] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.635 [2024-04-26 15:02:33.333901] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:47.635 [2024-04-26 15:02:33.333911] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.635 [2024-04-26 15:02:33.333976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.635 [2024-04-26 15:02:33.334043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.635 [2024-04-26 15:02:33.334104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.635 [2024-04-26 15:02:33.334106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.894 15:02:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:47.894 15:02:33 -- common/autotest_common.sh@850 -- # return 0 00:21:47.894 15:02:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:47.894 15:02:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:47.894 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:21:47.894 15:02:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.894 15:02:33 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:47.894 15:02:33 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:47.894 15:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.894 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:21:47.894 Malloc0 00:21:47.894 15:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.894 15:02:33 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:47.894 15:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.894 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:21:47.894 Delay0 00:21:47.894 15:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.894 15:02:33 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.894 15:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.894 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:21:47.894 [2024-04-26 15:02:33.522758] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.894 15:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.894 15:02:33 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:47.894 15:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.894 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:21:47.894 15:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.894 15:02:33 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:47.894 15:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.894 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:21:47.894 15:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.894 15:02:33 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.894 15:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.894 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:21:47.894 [2024-04-26 15:02:33.551075] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.894 15:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
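Condensed, the rpc_cmd trace above amounts to five RPCs on the target side: create a malloc bdev, wrap it in a delay bdev, start the TCP transport, and expose the delay bdev through a subsystem with a TCP listener. A minimal standalone sketch of the same sequence with SPDK's scripts/rpc.py, run from an SPDK checkout against an already-running nvmf_tgt (the trace issues the identical RPCs through its rpc_cmd helper inside the cvl_0_0_ns_spdk namespace):

RPC=./scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

# 64 MiB backing bdev with 512-byte blocks, wrapped in a delay bdev with
# 30 us average and p99 latency for reads and writes (-r/-t/-w/-n, microseconds).
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

# TCP transport with the trace's options, then subsystem + namespace + listener.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420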
00:21:47.894 15:02:33 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:48.458 15:02:34 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:48.458 15:02:34 -- common/autotest_common.sh@1184 -- # local i=0 00:21:48.458 15:02:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:48.458 15:02:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:48.458 15:02:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:50.993 15:02:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:50.993 15:02:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:50.993 15:02:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:50.993 15:02:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:50.993 15:02:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:50.993 15:02:36 -- common/autotest_common.sh@1194 -- # return 0 00:21:50.993 15:02:36 -- target/initiator_timeout.sh@35 -- # fio_pid=3821251 00:21:50.993 15:02:36 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:50.993 15:02:36 -- target/initiator_timeout.sh@37 -- # sleep 3 00:21:50.993 [global] 00:21:50.993 thread=1 00:21:50.993 invalidate=1 00:21:50.993 rw=write 00:21:50.993 time_based=1 00:21:50.993 runtime=60 00:21:50.993 ioengine=libaio 00:21:50.993 direct=1 00:21:50.993 bs=4096 00:21:50.993 iodepth=1 00:21:50.993 norandommap=0 00:21:50.993 numjobs=1 00:21:50.993 00:21:50.993 verify_dump=1 00:21:50.993 verify_backlog=512 00:21:50.993 verify_state_save=0 00:21:50.993 do_verify=1 00:21:50.993 verify=crc32c-intel 00:21:50.993 [job0] 00:21:50.993 filename=/dev/nvme0n1 00:21:50.993 Could not set queue depth (nvme0n1) 00:21:50.993 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:50.993 fio-3.35 00:21:50.993 Starting 1 thread 00:21:53.522 15:02:39 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:53.522 15:02:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.522 15:02:39 -- common/autotest_common.sh@10 -- # set +x 00:21:53.522 true 00:21:53.522 15:02:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.522 15:02:39 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:53.522 15:02:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.522 15:02:39 -- common/autotest_common.sh@10 -- # set +x 00:21:53.522 true 00:21:53.522 15:02:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.522 15:02:39 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:53.522 15:02:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.522 15:02:39 -- common/autotest_common.sh@10 -- # set +x 00:21:53.522 true 00:21:53.522 15:02:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.522 15:02:39 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:21:53.522 15:02:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.522 15:02:39 -- common/autotest_common.sh@10 -- # set +x 00:21:53.522 true 
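The four bdev_delay_update_latency RPCs above are the heart of this test: with the fio write job in flight, they push the delay bdev's latencies from 30 us up to 31 s (310 s for p99 writes), past the kernel initiator's default 30-second I/O timeout, so the in-flight commands stall and the timeout/retry path gets exercised; after the 3-second soak on the next lines they drop back to 30 us so the queued I/O drains and fio can finish. Collapsed into one sketch:

RPC=./scripts/rpc.py
# Stall in-flight I/O: raise every latency knob on the delay bdev well past
# the initiator's default 30 s I/O timeout (values are in microseconds).
$RPC bdev_delay_update_latency Delay0 avg_read  31000000
$RPC bdev_delay_update_latency Delay0 avg_write 31000000
$RPC bdev_delay_update_latency Delay0 p99_read  31000000
$RPC bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
# Un-stall: restore the 30 us latencies so the stalled I/O completes.
for m in avg_read avg_write p99_read p99_write; do
    $RPC bdev_delay_update_latency Delay0 "$m" 30
done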
00:21:53.522 15:02:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.522 15:02:39 -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:56.814 15:02:42 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:56.814 15:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.814 15:02:42 -- common/autotest_common.sh@10 -- # set +x 00:21:56.814 true 00:21:56.814 15:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.814 15:02:42 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:56.814 15:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.814 15:02:42 -- common/autotest_common.sh@10 -- # set +x 00:21:56.814 true 00:21:56.814 15:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.814 15:02:42 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:56.814 15:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.814 15:02:42 -- common/autotest_common.sh@10 -- # set +x 00:21:56.814 true 00:21:56.814 15:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.814 15:02:42 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:21:56.814 15:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.814 15:02:42 -- common/autotest_common.sh@10 -- # set +x 00:21:56.814 true 00:21:56.814 15:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.814 15:02:42 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:56.814 15:02:42 -- target/initiator_timeout.sh@54 -- # wait 3821251 00:22:53.075 00:22:53.075 job0: (groupid=0, jobs=1): err= 0: pid=3821320: Fri Apr 26 15:03:36 2024 00:22:53.075 read: IOPS=7, BW=31.2KiB/s (31.9kB/s)(1872KiB/60023msec) 00:22:53.075 slat (usec): min=8, max=10796, avg=43.91, stdev=498.20 00:22:53.075 clat (usec): min=324, max=40774k, avg=127889.55, stdev=1882907.89 00:22:53.075 lat (usec): min=340, max=40774k, avg=127933.47, stdev=1882906.57 00:22:53.075 clat percentiles (msec): 00:22:53.075 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:22:53.075 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 42], 00:22:53.075 | 70.00th=[ 42], 80.00th=[ 42], 90.00th=[ 43], 95.00th=[ 43], 00:22:53.075 | 99.00th=[ 43], 99.50th=[ 43], 99.90th=[17113], 99.95th=[17113], 00:22:53.075 | 99.99th=[17113] 00:22:53.075 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60023msec); 0 zone resets 00:22:53.075 slat (usec): min=8, max=1229, avg=22.54, stdev=73.97 00:22:53.075 clat (usec): min=178, max=481, avg=258.74, stdev=60.62 00:22:53.075 lat (usec): min=187, max=1525, avg=281.28, stdev=102.45 00:22:53.075 clat percentiles (usec): 00:22:53.075 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:22:53.075 | 30.00th=[ 215], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 260], 00:22:53.075 | 70.00th=[ 281], 80.00th=[ 310], 90.00th=[ 359], 95.00th=[ 383], 00:22:53.075 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 482], 99.95th=[ 482], 00:22:53.075 | 99.99th=[ 482] 00:22:53.075 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:22:53.075 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:53.075 lat (usec) : 250=29.29%, 500=23.27%, 750=0.10% 00:22:53.075 lat (msec) : 50=47.24%, >=2000=0.10% 00:22:53.075 cpu : usr=0.01%, sys=0.04%, ctx=984, majf=0, minf=2 00:22:53.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:22:53.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.075 issued rwts: total=468,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:53.075 00:22:53.075 Run status group 0 (all jobs): 00:22:53.075 READ: bw=31.2KiB/s (31.9kB/s), 31.2KiB/s-31.2KiB/s (31.9kB/s-31.9kB/s), io=1872KiB (1917kB), run=60023-60023msec 00:22:53.075 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60023-60023msec 00:22:53.075 00:22:53.075 Disk stats (read/write): 00:22:53.075 nvme0n1: ios=529/512, merge=0/0, ticks=19147/124, in_queue=19271, util=99.97% 00:22:53.075 15:03:36 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:53.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:53.075 15:03:36 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:53.075 15:03:36 -- common/autotest_common.sh@1205 -- # local i=0 00:22:53.075 15:03:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:53.075 15:03:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:53.075 15:03:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:53.075 15:03:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:53.075 15:03:36 -- common/autotest_common.sh@1217 -- # return 0 00:22:53.075 15:03:36 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:53.075 15:03:36 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:53.075 nvmf hotplug test: fio successful as expected 00:22:53.075 15:03:36 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.075 15:03:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.075 15:03:36 -- common/autotest_common.sh@10 -- # set +x 00:22:53.075 15:03:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.075 15:03:36 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:22:53.075 15:03:36 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:53.075 15:03:36 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:53.075 15:03:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:53.075 15:03:36 -- nvmf/common.sh@117 -- # sync 00:22:53.075 15:03:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.075 15:03:36 -- nvmf/common.sh@120 -- # set +e 00:22:53.075 15:03:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.075 15:03:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.075 rmmod nvme_tcp 00:22:53.075 rmmod nvme_fabrics 00:22:53.075 rmmod nvme_keyring 00:22:53.075 15:03:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.075 15:03:36 -- nvmf/common.sh@124 -- # set -e 00:22:53.075 15:03:36 -- nvmf/common.sh@125 -- # return 0 00:22:53.075 15:03:36 -- nvmf/common.sh@478 -- # '[' -n 3820825 ']' 00:22:53.075 15:03:36 -- nvmf/common.sh@479 -- # killprocess 3820825 00:22:53.075 15:03:36 -- common/autotest_common.sh@936 -- # '[' -z 3820825 ']' 00:22:53.075 15:03:36 -- common/autotest_common.sh@940 -- # kill -0 3820825 00:22:53.075 15:03:36 -- common/autotest_common.sh@941 -- # uname 00:22:53.075 15:03:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.075 15:03:36 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 3820825 00:22:53.075 15:03:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:53.075 15:03:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:53.075 15:03:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3820825' 00:22:53.075 killing process with pid 3820825 00:22:53.075 15:03:36 -- common/autotest_common.sh@955 -- # kill 3820825 00:22:53.075 15:03:36 -- common/autotest_common.sh@960 -- # wait 3820825 00:22:53.075 15:03:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:53.075 15:03:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:53.075 15:03:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:53.075 15:03:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.075 15:03:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.075 15:03:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.075 15:03:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.075 15:03:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.640 15:03:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.640 00:22:53.641 real 1m8.168s 00:22:53.641 user 4m10.930s 00:22:53.641 sys 0m6.391s 00:22:53.641 15:03:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:53.641 15:03:39 -- common/autotest_common.sh@10 -- # set +x 00:22:53.641 ************************************ 00:22:53.641 END TEST nvmf_initiator_timeout 00:22:53.641 ************************************ 00:22:53.641 15:03:39 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:22:53.641 15:03:39 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:22:53.641 15:03:39 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:22:53.641 15:03:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.641 15:03:39 -- common/autotest_common.sh@10 -- # set +x 00:22:55.543 15:03:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:55.543 15:03:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.543 15:03:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.543 15:03:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.543 15:03:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.543 15:03:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.543 15:03:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.543 15:03:41 -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.543 15:03:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.543 15:03:41 -- nvmf/common.sh@296 -- # e810=() 00:22:55.543 15:03:41 -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.543 15:03:41 -- nvmf/common.sh@297 -- # x722=() 00:22:55.543 15:03:41 -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.543 15:03:41 -- nvmf/common.sh@298 -- # mlx=() 00:22:55.543 15:03:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.543 15:03:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.543 15:03:41 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.543 15:03:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.543 15:03:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.543 15:03:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.543 15:03:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.543 15:03:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:55.543 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:55.543 15:03:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.543 15:03:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:55.543 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:55.543 15:03:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.543 15:03:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.543 15:03:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.543 15:03:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:55.543 15:03:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.543 15:03:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:55.543 Found net devices under 0000:84:00.0: cvl_0_0 00:22:55.543 15:03:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.543 15:03:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.543 15:03:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.543 15:03:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:55.543 15:03:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.543 15:03:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:55.543 Found net devices under 0000:84:00.1: cvl_0_1 00:22:55.543 15:03:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.543 15:03:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:55.543 15:03:41 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.543 15:03:41 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:22:55.543 15:03:41 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:55.543 15:03:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:55.543 15:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:55.543 15:03:41 -- common/autotest_common.sh@10 -- # set +x 00:22:55.543 ************************************ 00:22:55.543 START TEST nvmf_perf_adq 00:22:55.543 ************************************ 00:22:55.543 15:03:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:55.543 * Looking for test storage... 00:22:55.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:55.543 15:03:41 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.543 15:03:41 -- nvmf/common.sh@7 -- # uname -s 00:22:55.543 15:03:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.543 15:03:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.543 15:03:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.543 15:03:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.543 15:03:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.543 15:03:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.543 15:03:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.543 15:03:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.543 15:03:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.543 15:03:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.543 15:03:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:55.543 15:03:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:55.543 15:03:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.543 15:03:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.543 15:03:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.543 15:03:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.543 15:03:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.543 15:03:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.543 15:03:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.543 15:03:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.543 15:03:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.543 15:03:41 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.543 15:03:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.543 15:03:41 -- paths/export.sh@5 -- # export PATH 00:22:55.543 15:03:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.543 15:03:41 -- nvmf/common.sh@47 -- # : 0 00:22:55.543 15:03:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.543 15:03:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.543 15:03:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.543 15:03:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.543 15:03:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.543 15:03:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.543 15:03:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.543 15:03:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.543 15:03:41 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:55.543 15:03:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.543 15:03:41 -- common/autotest_common.sh@10 -- # set +x 00:22:57.445 15:03:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:57.445 15:03:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.445 15:03:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.445 15:03:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.445 15:03:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.445 15:03:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.445 15:03:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.445 15:03:43 -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.445 15:03:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.445 15:03:43 -- nvmf/common.sh@296 -- # e810=() 00:22:57.445 15:03:43 -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.445 15:03:43 -- nvmf/common.sh@297 -- # x722=() 00:22:57.445 15:03:43 -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.445 15:03:43 -- nvmf/common.sh@298 -- # mlx=() 00:22:57.445 15:03:43 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:22:57.445 15:03:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.445 15:03:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.445 15:03:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.445 15:03:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.445 15:03:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.445 15:03:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.445 15:03:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.446 15:03:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.446 15:03:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.446 15:03:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.446 15:03:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.446 15:03:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.446 15:03:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.446 15:03:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.446 15:03:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.446 15:03:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:57.446 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:57.446 15:03:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.446 15:03:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:57.446 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:57.446 15:03:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.446 15:03:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.446 15:03:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.446 15:03:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.446 15:03:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:57.446 15:03:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.446 15:03:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:57.446 Found net devices under 0000:84:00.0: cvl_0_0 00:22:57.446 15:03:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.446 15:03:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.446 15:03:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:57.446 15:03:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:57.446 15:03:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.446 15:03:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:57.446 Found net devices under 0000:84:00.1: cvl_0_1 00:22:57.446 15:03:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.446 15:03:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:57.446 15:03:43 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.446 15:03:43 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:57.446 15:03:43 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:57.446 15:03:43 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:22:57.446 15:03:43 -- target/perf_adq.sh@52 -- # rmmod ice 00:22:58.384 15:03:43 -- target/perf_adq.sh@53 -- # modprobe ice 00:22:59.758 15:03:45 -- target/perf_adq.sh@54 -- # sleep 5 00:23:05.022 15:03:50 -- target/perf_adq.sh@67 -- # nvmftestinit 00:23:05.022 15:03:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:05.022 15:03:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.022 15:03:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:05.022 15:03:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:05.022 15:03:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:05.022 15:03:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.022 15:03:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.022 15:03:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.022 15:03:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:05.022 15:03:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:05.022 15:03:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.022 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.022 15:03:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:05.022 15:03:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:05.022 15:03:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:05.022 15:03:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:05.022 15:03:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:05.022 15:03:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:05.022 15:03:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:05.022 15:03:50 -- nvmf/common.sh@295 -- # net_devs=() 00:23:05.022 15:03:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:05.022 15:03:50 -- nvmf/common.sh@296 -- # e810=() 00:23:05.022 15:03:50 -- nvmf/common.sh@296 -- # local -ga e810 00:23:05.022 15:03:50 -- nvmf/common.sh@297 -- # x722=() 00:23:05.022 15:03:50 -- nvmf/common.sh@297 -- # local -ga x722 00:23:05.022 15:03:50 -- nvmf/common.sh@298 -- # mlx=() 00:23:05.022 15:03:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:05.022 15:03:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@312 -- # 
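The nvmftestinit call that follows re-runs the sysfs NIC discovery already traced twice in this section: for every candidate PCI function, glob its net/ directory to find the kernel netdev the driver bound. Stripped of the device-ID matching, the core of that walk is roughly (pci_devs assumed populated, as in the trace):

shopt -s nullglob   # so an empty net/ directory yields an empty array
net_devs=()
for pci in "${pci_devs[@]}"; do
    # e.g. /sys/bus/pci/devices/0000:84:00.0/net/cvl_0_0 -> cvl_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    (( ${#pci_net_devs[@]} == 0 )) && continue    # no netdev bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")       # keep only the basenames
    net_devs+=("${pci_net_devs[@]}")
done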
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.022 15:03:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:05.022 15:03:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:05.023 15:03:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:05.023 15:03:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.023 15:03:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:05.023 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:05.023 15:03:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.023 15:03:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:05.023 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:05.023 15:03:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:05.023 15:03:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.023 15:03:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.023 15:03:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:05.023 15:03:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.023 15:03:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:05.023 Found net devices under 0000:84:00.0: cvl_0_0 00:23:05.023 15:03:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.023 15:03:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.023 15:03:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.023 15:03:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:05.023 15:03:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.023 15:03:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:05.023 Found net devices under 0000:84:00.1: cvl_0_1 00:23:05.023 15:03:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.023 15:03:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:05.023 15:03:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:05.023 15:03:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:05.023 15:03:50 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:05.023 15:03:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.023 15:03:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.023 15:03:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.023 15:03:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:05.023 15:03:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.023 15:03:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.023 15:03:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:05.023 15:03:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.023 15:03:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.023 15:03:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:05.023 15:03:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:05.023 15:03:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.023 15:03:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.023 15:03:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.023 15:03:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.023 15:03:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:05.023 15:03:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.023 15:03:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.023 15:03:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.023 15:03:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:05.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:23:05.023 00:23:05.023 --- 10.0.0.2 ping statistics --- 00:23:05.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.023 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:23:05.023 15:03:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:23:05.023 00:23:05.023 --- 10.0.0.1 ping statistics --- 00:23:05.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.023 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:05.023 15:03:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.023 15:03:50 -- nvmf/common.sh@411 -- # return 0 00:23:05.023 15:03:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:05.023 15:03:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.023 15:03:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:05.023 15:03:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.023 15:03:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:05.023 15:03:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:05.023 15:03:50 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:05.023 15:03:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:05.023 15:03:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:05.023 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.023 15:03:50 -- nvmf/common.sh@470 -- # nvmfpid=3833505 00:23:05.023 15:03:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:05.023 15:03:50 -- nvmf/common.sh@471 -- # waitforlisten 3833505 00:23:05.023 15:03:50 -- common/autotest_common.sh@817 -- # '[' -z 3833505 ']' 00:23:05.023 15:03:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.023 15:03:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:05.023 15:03:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.023 15:03:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:05.023 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.023 [2024-04-26 15:03:50.519506] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:23:05.023 [2024-04-26 15:03:50.519624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.023 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.023 [2024-04-26 15:03:50.560534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:05.023 [2024-04-26 15:03:50.588397] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.023 [2024-04-26 15:03:50.680178] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.023 [2024-04-26 15:03:50.680245] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.023 [2024-04-26 15:03:50.680260] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.023 [2024-04-26 15:03:50.680272] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:05.023 [2024-04-26 15:03:50.680284] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.023 [2024-04-26 15:03:50.680405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.023 [2024-04-26 15:03:50.680433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.023 [2024-04-26 15:03:50.680492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.023 [2024-04-26 15:03:50.680494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.023 15:03:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:05.023 15:03:50 -- common/autotest_common.sh@850 -- # return 0 00:23:05.023 15:03:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:05.023 15:03:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:05.023 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.282 15:03:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.282 15:03:50 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:23:05.282 15:03:50 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:05.282 15:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.282 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.282 15:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.282 15:03:50 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:05.282 15:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.282 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.282 15:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.282 15:03:50 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:05.282 15:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.282 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.282 [2024-04-26 15:03:50.902869] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.282 15:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.282 15:03:50 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:05.282 15:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.282 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.282 Malloc1 00:23:05.282 15:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.282 15:03:50 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:05.282 15:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.282 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.282 15:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.282 15:03:50 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:05.282 15:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.282 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.282 15:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.282 15:03:50 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:05.282 15:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.282 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:05.282 [2024-04-26 
15:03:50.956203] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.283 15:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.283 15:03:50 -- target/perf_adq.sh@73 -- # perfpid=3833534 00:23:05.283 15:03:50 -- target/perf_adq.sh@74 -- # sleep 2 00:23:05.283 15:03:50 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:05.283 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.812 15:03:52 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:23:07.812 15:03:52 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:07.812 15:03:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.812 15:03:52 -- target/perf_adq.sh@76 -- # wc -l 00:23:07.812 15:03:52 -- common/autotest_common.sh@10 -- # set +x 00:23:07.812 15:03:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.812 15:03:53 -- target/perf_adq.sh@76 -- # count=4 00:23:07.812 15:03:53 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:23:07.812 15:03:53 -- target/perf_adq.sh@81 -- # wait 3833534 00:23:15.920 Initializing NVMe Controllers 00:23:15.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:15.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:15.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:15.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:15.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:15.920 Initialization complete. Launching workers. 
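The nvmf_get_stats / jq pipeline above is the ADQ sanity check for this run: the perf initiator uses four cores (-c 0xF0, one qpair per core) against a four-core target (-m 0xF), so each of the target's four poll groups should own exactly one I/O qpair. A sketch of the same check (the trace runs it via rpc_cmd in the target namespace):

# Count poll groups that currently own exactly one I/O qpair; with ADQ
# steering the four connections across four cores, all 4 groups qualify.
count=$(./scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
    | wc -l)
(( count == 4 )) || echo "ADQ qpair distribution check failed: $count/4" >&2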
00:23:15.920 ======================================================== 00:23:15.920 Latency(us) 00:23:15.920 Device Information : IOPS MiB/s Average min max 00:23:15.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10014.04 39.12 6390.58 2053.65 9431.72 00:23:15.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10305.53 40.26 6209.62 2528.25 8872.40 00:23:15.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10421.22 40.71 6141.17 1872.30 9201.78 00:23:15.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10173.03 39.74 6291.14 2148.76 9395.68 00:23:15.920 ======================================================== 00:23:15.920 Total : 40913.82 159.82 6256.75 1872.30 9431.72 00:23:15.920 00:23:15.920 15:04:01 -- target/perf_adq.sh@82 -- # nvmftestfini 00:23:15.920 15:04:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:15.920 15:04:01 -- nvmf/common.sh@117 -- # sync 00:23:15.920 15:04:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.920 15:04:01 -- nvmf/common.sh@120 -- # set +e 00:23:15.920 15:04:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.920 15:04:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.920 rmmod nvme_tcp 00:23:15.920 rmmod nvme_fabrics 00:23:15.920 rmmod nvme_keyring 00:23:15.920 15:04:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.920 15:04:01 -- nvmf/common.sh@124 -- # set -e 00:23:15.920 15:04:01 -- nvmf/common.sh@125 -- # return 0 00:23:15.920 15:04:01 -- nvmf/common.sh@478 -- # '[' -n 3833505 ']' 00:23:15.920 15:04:01 -- nvmf/common.sh@479 -- # killprocess 3833505 00:23:15.920 15:04:01 -- common/autotest_common.sh@936 -- # '[' -z 3833505 ']' 00:23:15.920 15:04:01 -- common/autotest_common.sh@940 -- # kill -0 3833505 00:23:15.920 15:04:01 -- common/autotest_common.sh@941 -- # uname 00:23:15.920 15:04:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:15.920 15:04:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3833505 00:23:15.920 15:04:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:15.920 15:04:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:15.920 15:04:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3833505' 00:23:15.920 killing process with pid 3833505 00:23:15.920 15:04:01 -- common/autotest_common.sh@955 -- # kill 3833505 00:23:15.920 15:04:01 -- common/autotest_common.sh@960 -- # wait 3833505 00:23:15.920 15:04:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:15.920 15:04:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:15.920 15:04:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:15.920 15:04:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.920 15:04:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.920 15:04:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.920 15:04:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.920 15:04:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.850 15:04:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.850 15:04:03 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:23:17.850 15:04:03 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:18.423 15:04:04 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:19.797 15:04:05 -- target/perf_adq.sh@54 -- # sleep 5 00:23:25.063 15:04:10 -- target/perf_adq.sh@87 -- # nvmftestinit 00:23:25.063 
15:04:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:25.063 15:04:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.063 15:04:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:25.063 15:04:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:25.063 15:04:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:25.063 15:04:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.063 15:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.063 15:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.063 15:04:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:25.063 15:04:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.063 15:04:10 -- common/autotest_common.sh@10 -- # set +x 00:23:25.063 15:04:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:25.063 15:04:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.063 15:04:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.063 15:04:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.063 15:04:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.063 15:04:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.063 15:04:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.063 15:04:10 -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.063 15:04:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.063 15:04:10 -- nvmf/common.sh@296 -- # e810=() 00:23:25.063 15:04:10 -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.063 15:04:10 -- nvmf/common.sh@297 -- # x722=() 00:23:25.063 15:04:10 -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.063 15:04:10 -- nvmf/common.sh@298 -- # mlx=() 00:23:25.063 15:04:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.063 15:04:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.063 15:04:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.063 15:04:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.063 15:04:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.063 15:04:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.063 15:04:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:25.063 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:25.063 15:04:10 -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.063 15:04:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:25.063 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:25.063 15:04:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.063 15:04:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.063 15:04:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.063 15:04:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.063 15:04:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:25.063 15:04:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.063 15:04:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:25.063 Found net devices under 0000:84:00.0: cvl_0_0 00:23:25.063 15:04:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.063 15:04:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.063 15:04:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.063 15:04:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:25.063 15:04:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.063 15:04:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:25.063 Found net devices under 0000:84:00.1: cvl_0_1 00:23:25.064 15:04:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.064 15:04:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:25.064 15:04:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:25.064 15:04:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:25.064 15:04:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:25.064 15:04:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:25.064 15:04:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.064 15:04:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.064 15:04:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.064 15:04:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.064 15:04:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.064 15:04:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.064 15:04:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.064 15:04:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.064 15:04:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.064 15:04:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.064 15:04:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.064 15:04:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.064 15:04:10 -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.064 15:04:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.064 15:04:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.064 15:04:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.064 15:04:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.064 15:04:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.064 15:04:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.064 15:04:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:23:25.064 00:23:25.064 --- 10.0.0.2 ping statistics --- 00:23:25.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.064 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:23:25.064 15:04:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:23:25.064 00:23:25.064 --- 10.0.0.1 ping statistics --- 00:23:25.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.064 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:23:25.064 15:04:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.064 15:04:10 -- nvmf/common.sh@411 -- # return 0 00:23:25.064 15:04:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:25.064 15:04:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.064 15:04:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:25.064 15:04:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:25.064 15:04:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.064 15:04:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:25.064 15:04:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:25.064 15:04:10 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:23:25.064 15:04:10 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:25.064 15:04:10 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:25.064 15:04:10 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:25.064 net.core.busy_poll = 1 00:23:25.064 15:04:10 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:25.064 net.core.busy_read = 1 00:23:25.064 15:04:10 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:25.064 15:04:10 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:25.064 15:04:10 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:25.064 15:04:10 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:25.064 15:04:10 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:25.064 15:04:10 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:25.064 15:04:10 -- 
nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:25.064 15:04:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:25.064 15:04:10 -- common/autotest_common.sh@10 -- # set +x 00:23:25.064 15:04:10 -- nvmf/common.sh@470 -- # nvmfpid=3836137 00:23:25.064 15:04:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:25.064 15:04:10 -- nvmf/common.sh@471 -- # waitforlisten 3836137 00:23:25.064 15:04:10 -- common/autotest_common.sh@817 -- # '[' -z 3836137 ']' 00:23:25.064 15:04:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.064 15:04:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:25.064 15:04:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.064 15:04:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:25.064 15:04:10 -- common/autotest_common.sh@10 -- # set +x 00:23:25.323 [2024-04-26 15:04:10.816787] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:23:25.323 [2024-04-26 15:04:10.816860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.323 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.323 [2024-04-26 15:04:10.853624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:25.323 [2024-04-26 15:04:10.880245] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.323 [2024-04-26 15:04:10.968940] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.323 [2024-04-26 15:04:10.968996] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.323 [2024-04-26 15:04:10.969025] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.323 [2024-04-26 15:04:10.969039] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.323 [2024-04-26 15:04:10.969064] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
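For reference, the namespace split and ADQ steering performed in the trace above reads as one standalone sequence. A minimal sketch follows; the interface names (cvl_0_0/cvl_0_1), the 10.0.0.1/10.0.0.2 addresses, and port 4420 are taken from this run, and every command mirrors one traced in nvmf/common.sh or perf_adq.sh above.

#!/usr/bin/env bash
# The target-side port (cvl_0_0) moves into a private network namespace;
# the initiator side (cvl_0_1) stays in the default namespace.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# ADQ (perf_adq.sh@22-38): enable hardware TC offload, create two
# traffic classes, and steer NVMe/TCP traffic (10.0.0.2:4420) into the
# second class in hardware; busy polling keeps the steered queues hot.
ip netns exec "$NS" ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec "$NS" ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
ip netns exec "$NS" tc qdisc add dev cvl_0_0 root mqprio \
  num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec "$NS" tc qdisc add dev cvl_0_0 ingress
ip netns exec "$NS" tc filter add dev cvl_0_0 protocol ip parent ffff: \
  prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 \
  skip_sw hw_tc 1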
00:23:25.323 [2024-04-26 15:04:10.969174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.323 [2024-04-26 15:04:10.969234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.323 [2024-04-26 15:04:10.969301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.323 [2024-04-26 15:04:10.969304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.323 15:04:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:25.323 15:04:11 -- common/autotest_common.sh@850 -- # return 0 00:23:25.323 15:04:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:25.323 15:04:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:25.323 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:25.581 15:04:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.581 15:04:11 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:23:25.581 15:04:11 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:25.581 15:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.581 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:25.581 15:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.581 15:04:11 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:25.581 15:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.581 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:25.581 15:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.581 15:04:11 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:25.581 15:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.581 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:25.581 [2024-04-26 15:04:11.188835] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.581 15:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.581 15:04:11 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:25.581 15:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.581 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:25.581 Malloc1 00:23:25.581 15:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.581 15:04:11 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.581 15:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.581 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:25.581 15:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.581 15:04:11 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:25.581 15:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.581 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:25.581 15:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.581 15:04:11 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.581 15:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.581 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:25.581 [2024-04-26 15:04:11.242493] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.581 15:04:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.581 15:04:11 -- target/perf_adq.sh@94 -- # perfpid=3836171 00:23:25.581 15:04:11 -- target/perf_adq.sh@95 -- # sleep 2 00:23:25.581 15:04:11 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:25.581 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.128 15:04:13 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:23:28.128 15:04:13 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:28.128 15:04:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.128 15:04:13 -- common/autotest_common.sh@10 -- # set +x 00:23:28.128 15:04:13 -- target/perf_adq.sh@97 -- # wc -l 00:23:28.128 15:04:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.128 15:04:13 -- target/perf_adq.sh@97 -- # count=2 00:23:28.128 15:04:13 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:23:28.128 15:04:13 -- target/perf_adq.sh@103 -- # wait 3836171 00:23:36.237 Initializing NVMe Controllers 00:23:36.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:36.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:36.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:36.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:36.237 Initialization complete. Launching workers. 00:23:36.237 ======================================================== 00:23:36.237 Latency(us) 00:23:36.237 Device Information : IOPS MiB/s Average min max 00:23:36.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4820.49 18.83 13323.43 1697.11 61394.50 00:23:36.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4796.69 18.74 13344.40 2243.38 60169.01 00:23:36.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4192.99 16.38 15307.20 1946.75 61952.90 00:23:36.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12884.26 50.33 4968.04 1466.12 7286.45 00:23:36.237 ======================================================== 00:23:36.237 Total : 26694.42 104.28 9606.01 1466.12 61952.90 00:23:36.237 00:23:36.237 15:04:21 -- target/perf_adq.sh@104 -- # nvmftestfini 00:23:36.237 15:04:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:36.237 15:04:21 -- nvmf/common.sh@117 -- # sync 00:23:36.237 15:04:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.237 15:04:21 -- nvmf/common.sh@120 -- # set +e 00:23:36.237 15:04:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.237 15:04:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.237 rmmod nvme_tcp 00:23:36.237 rmmod nvme_fabrics 00:23:36.237 rmmod nvme_keyring 00:23:36.237 15:04:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.237 15:04:21 -- nvmf/common.sh@124 -- # set -e 00:23:36.237 15:04:21 -- nvmf/common.sh@125 -- # return 0 00:23:36.237 15:04:21 -- nvmf/common.sh@478 -- # '[' -n 3836137 ']' 00:23:36.237 15:04:21 -- nvmf/common.sh@479 -- # killprocess 3836137 00:23:36.237 15:04:21 -- common/autotest_common.sh@936 -- # '[' -z 3836137 ']' 00:23:36.237 15:04:21 -- common/autotest_common.sh@940 
-- # kill -0 3836137 00:23:36.237 15:04:21 -- common/autotest_common.sh@941 -- # uname 00:23:36.237 15:04:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.237 15:04:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3836137 00:23:36.237 15:04:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:36.237 15:04:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:36.237 15:04:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3836137' 00:23:36.237 killing process with pid 3836137 00:23:36.237 15:04:21 -- common/autotest_common.sh@955 -- # kill 3836137 00:23:36.237 15:04:21 -- common/autotest_common.sh@960 -- # wait 3836137 00:23:36.237 15:04:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:36.237 15:04:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:36.237 15:04:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:36.237 15:04:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.237 15:04:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.237 15:04:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.237 15:04:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.237 15:04:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.519 15:04:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.519 15:04:24 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:23:39.519 00:23:39.519 real 0m43.607s 00:23:39.519 user 2m39.211s 00:23:39.519 sys 0m9.660s 00:23:39.519 15:04:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:39.519 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:23:39.519 ************************************ 00:23:39.519 END TEST nvmf_perf_adq 00:23:39.519 ************************************ 00:23:39.519 15:04:24 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:39.519 15:04:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:39.519 15:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.519 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:23:39.519 ************************************ 00:23:39.519 START TEST nvmf_shutdown 00:23:39.519 ************************************ 00:23:39.519 15:04:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:39.519 * Looking for test storage... 
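The nvmftestfini teardown traced just above is the inverse of the init. Condensed as a sketch, with the pid and interface names from this run; note that _remove_spdk_ns has its body redirected to /dev/null in the trace, so the namespace deletion shown below is an assumption about what that helper does:

sync
modprobe -v -r nvme-tcp     # per the rmmod lines above this also drops
                            # nvme_tcp, nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill 3836137                # killprocess: stop the nvmf_tgt (reactor_0)
ip netns delete cvl_0_0_ns_spdk   # assumed _remove_spdk_ns behavior
ip -4 addr flush cvl_0_1    # nvmf_tcp_fini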
00:23:39.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:39.519 15:04:24 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.519 15:04:24 -- nvmf/common.sh@7 -- # uname -s 00:23:39.519 15:04:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.519 15:04:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.519 15:04:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.519 15:04:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.519 15:04:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.519 15:04:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.519 15:04:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.519 15:04:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.519 15:04:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.519 15:04:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.519 15:04:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:39.519 15:04:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:39.519 15:04:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.519 15:04:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.519 15:04:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.519 15:04:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.519 15:04:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.519 15:04:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.519 15:04:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.519 15:04:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.519 15:04:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.519 15:04:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.519 15:04:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.519 15:04:24 -- paths/export.sh@5 -- # export PATH 00:23:39.519 15:04:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.519 15:04:24 -- nvmf/common.sh@47 -- # : 0 00:23:39.519 15:04:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.519 15:04:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.519 15:04:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.519 15:04:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.519 15:04:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.519 15:04:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.519 15:04:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.519 15:04:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.519 15:04:24 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:39.519 15:04:24 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:39.519 15:04:24 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:39.519 15:04:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:39.519 15:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.519 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:23:39.519 ************************************ 00:23:39.519 START TEST nvmf_shutdown_tc1 00:23:39.519 ************************************ 00:23:39.519 15:04:25 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:23:39.519 15:04:25 -- target/shutdown.sh@74 -- # starttarget 00:23:39.519 15:04:25 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:39.519 15:04:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:39.519 15:04:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.519 15:04:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:39.519 15:04:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:39.519 15:04:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:39.519 15:04:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.519 15:04:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.519 15:04:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.519 15:04:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:39.519 15:04:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:39.519 15:04:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.519 15:04:25 -- common/autotest_common.sh@10 -- # set +x 00:23:41.420 15:04:27 -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:41.420 15:04:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.420 15:04:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.420 15:04:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.420 15:04:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.420 15:04:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.420 15:04:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.420 15:04:27 -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.420 15:04:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.420 15:04:27 -- nvmf/common.sh@296 -- # e810=() 00:23:41.420 15:04:27 -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.420 15:04:27 -- nvmf/common.sh@297 -- # x722=() 00:23:41.420 15:04:27 -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.420 15:04:27 -- nvmf/common.sh@298 -- # mlx=() 00:23:41.420 15:04:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.420 15:04:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.420 15:04:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.420 15:04:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.420 15:04:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.420 15:04:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.420 15:04:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:41.420 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:41.420 15:04:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.420 15:04:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:41.420 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:41.420 15:04:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.420 15:04:27 -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.420 15:04:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.420 15:04:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.420 15:04:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:41.420 15:04:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.420 15:04:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:41.420 Found net devices under 0000:84:00.0: cvl_0_0 00:23:41.420 15:04:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.420 15:04:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.420 15:04:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.420 15:04:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:41.420 15:04:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.420 15:04:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:41.420 Found net devices under 0000:84:00.1: cvl_0_1 00:23:41.420 15:04:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.420 15:04:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:41.420 15:04:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:41.420 15:04:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:41.420 15:04:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:41.420 15:04:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.420 15:04:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.420 15:04:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.420 15:04:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.420 15:04:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.420 15:04:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.420 15:04:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.420 15:04:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.420 15:04:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.420 15:04:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.420 15:04:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.421 15:04:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.421 15:04:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.421 15:04:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.421 15:04:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.421 15:04:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.421 15:04:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.679 15:04:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.679 15:04:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.679 15:04:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:41.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:41.679 00:23:41.679 --- 10.0.0.2 ping statistics --- 00:23:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.679 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:41.679 15:04:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:23:41.679 00:23:41.679 --- 10.0.0.1 ping statistics --- 00:23:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.679 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:41.679 15:04:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.679 15:04:27 -- nvmf/common.sh@411 -- # return 0 00:23:41.679 15:04:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:41.679 15:04:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.679 15:04:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:41.679 15:04:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:41.679 15:04:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.679 15:04:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:41.679 15:04:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:41.679 15:04:27 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:41.679 15:04:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:41.679 15:04:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:41.679 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:23:41.679 15:04:27 -- nvmf/common.sh@470 -- # nvmfpid=3839492 00:23:41.679 15:04:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:41.679 15:04:27 -- nvmf/common.sh@471 -- # waitforlisten 3839492 00:23:41.679 15:04:27 -- common/autotest_common.sh@817 -- # '[' -z 3839492 ']' 00:23:41.679 15:04:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.679 15:04:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:41.679 15:04:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.679 15:04:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:41.679 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:23:41.679 [2024-04-26 15:04:27.274908] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:23:41.679 [2024-04-26 15:04:27.274978] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.679 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.679 [2024-04-26 15:04:27.311603] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:41.679 [2024-04-26 15:04:27.338522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.938 [2024-04-26 15:04:27.425807] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:41.938 [2024-04-26 15:04:27.425866] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.938 [2024-04-26 15:04:27.425897] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.938 [2024-04-26 15:04:27.425908] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.938 [2024-04-26 15:04:27.425919] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.938 [2024-04-26 15:04:27.426019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.938 [2024-04-26 15:04:27.426092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.938 [2024-04-26 15:04:27.426153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:41.938 [2024-04-26 15:04:27.426156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.938 15:04:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:41.938 15:04:27 -- common/autotest_common.sh@850 -- # return 0 00:23:41.938 15:04:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:41.938 15:04:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:41.938 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:23:41.938 15:04:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.938 15:04:27 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.938 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.938 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:23:41.938 [2024-04-26 15:04:27.585892] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.938 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.938 15:04:27 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:41.938 15:04:27 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:41.938 15:04:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:41.938 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:23:41.938 15:04:27 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- 
target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:41.938 15:04:27 -- target/shutdown.sh@28 -- # cat 00:23:41.938 15:04:27 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:41.938 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.938 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:23:41.938 Malloc1 00:23:41.938 [2024-04-26 15:04:27.669970] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.197 Malloc2 00:23:42.197 Malloc3 00:23:42.197 Malloc4 00:23:42.197 Malloc5 00:23:42.197 Malloc6 00:23:42.455 Malloc7 00:23:42.455 Malloc8 00:23:42.455 Malloc9 00:23:42.455 Malloc10 00:23:42.455 15:04:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.455 15:04:28 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:42.455 15:04:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:42.455 15:04:28 -- common/autotest_common.sh@10 -- # set +x 00:23:42.455 15:04:28 -- target/shutdown.sh@78 -- # perfpid=3839668 00:23:42.455 15:04:28 -- target/shutdown.sh@79 -- # waitforlisten 3839668 /var/tmp/bdevperf.sock 00:23:42.455 15:04:28 -- common/autotest_common.sh@817 -- # '[' -z 3839668 ']' 00:23:42.455 15:04:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.455 15:04:28 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:42.456 15:04:28 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:42.456 15:04:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:42.456 15:04:28 -- nvmf/common.sh@521 -- # config=() 00:23:42.456 15:04:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
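The create_subsystems loop above (shutdown.sh@27-28) appends one heredoc block per subsystem to rpcs.txt, but xtrace swallows the heredoc bodies, so only the bare `cat` lines appear. Based on the RPCs the earlier perf_adq run issued one at a time, and on the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 defaults set at shutdown.sh@11-12, a plausible reconstruction of one loop iteration is the following; it is hypothetical, not recovered from the trace:

# One iteration of the shutdown.sh@27 loop -- reconstructed sketch.
cat <<EOF >> rpcs.txt
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
# Replayed in one batch once the loop finishes, e.g.:
#   scripts/rpc.py < rpcs.txt

The Malloc1..Malloc10 notices above are consistent with this shape.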
00:23:42.456 15:04:28 -- nvmf/common.sh@521 -- # local subsystem config 00:23:42.456 15:04:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- common/autotest_common.sh@10 -- # set +x 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.456 { 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme$subsystem", 00:23:42.456 "trtype": "$TEST_TRANSPORT", 00:23:42.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "$NVMF_PORT", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.456 "hdgst": ${hdgst:-false}, 00:23:42.456 "ddgst": ${ddgst:-false} 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 } 00:23:42.456 EOF 00:23:42.456 )") 00:23:42.456 15:04:28 -- nvmf/common.sh@543 -- # cat 00:23:42.456 15:04:28 -- nvmf/common.sh@545 -- # jq . 00:23:42.456 15:04:28 -- nvmf/common.sh@546 -- # IFS=, 00:23:42.456 15:04:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme1", 00:23:42.456 "trtype": "tcp", 00:23:42.456 "traddr": "10.0.0.2", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "4420", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.456 "hdgst": false, 00:23:42.456 "ddgst": false 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 },{ 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme2", 00:23:42.456 "trtype": "tcp", 00:23:42.456 "traddr": "10.0.0.2", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "4420", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:42.456 "hdgst": false, 00:23:42.456 "ddgst": false 00:23:42.456 }, 00:23:42.456 "method": "bdev_nvme_attach_controller" 00:23:42.456 },{ 00:23:42.456 "params": { 00:23:42.456 "name": "Nvme3", 00:23:42.456 "trtype": "tcp", 00:23:42.456 "traddr": "10.0.0.2", 00:23:42.456 "adrfam": "ipv4", 00:23:42.456 "trsvcid": "4420", 00:23:42.456 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:42.456 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:42.456 "hdgst": false, 00:23:42.457 "ddgst": false 00:23:42.457 }, 00:23:42.457 "method": "bdev_nvme_attach_controller" 00:23:42.457 },{ 00:23:42.457 "params": { 00:23:42.457 "name": "Nvme4", 00:23:42.457 "trtype": "tcp", 00:23:42.457 "traddr": "10.0.0.2", 00:23:42.457 "adrfam": "ipv4", 00:23:42.457 "trsvcid": "4420", 00:23:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:42.457 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:42.457 "hdgst": false, 00:23:42.457 "ddgst": false 00:23:42.457 }, 00:23:42.457 "method": "bdev_nvme_attach_controller" 00:23:42.457 },{ 00:23:42.457 "params": { 00:23:42.457 "name": "Nvme5", 00:23:42.457 "trtype": "tcp", 00:23:42.457 "traddr": "10.0.0.2", 00:23:42.457 "adrfam": "ipv4", 00:23:42.457 "trsvcid": "4420", 00:23:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:42.457 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:42.457 "hdgst": false, 00:23:42.457 "ddgst": false 00:23:42.457 }, 00:23:42.457 "method": "bdev_nvme_attach_controller" 00:23:42.457 },{ 00:23:42.457 "params": { 00:23:42.457 "name": "Nvme6", 00:23:42.457 "trtype": "tcp", 00:23:42.457 "traddr": "10.0.0.2", 00:23:42.457 "adrfam": "ipv4", 00:23:42.457 "trsvcid": "4420", 00:23:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:42.457 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:42.457 "hdgst": false, 00:23:42.457 "ddgst": false 00:23:42.457 }, 00:23:42.457 "method": "bdev_nvme_attach_controller" 00:23:42.457 },{ 00:23:42.457 "params": { 00:23:42.457 "name": "Nvme7", 00:23:42.457 "trtype": 
"tcp", 00:23:42.457 "traddr": "10.0.0.2", 00:23:42.457 "adrfam": "ipv4", 00:23:42.457 "trsvcid": "4420", 00:23:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:42.457 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:42.457 "hdgst": false, 00:23:42.457 "ddgst": false 00:23:42.457 }, 00:23:42.457 "method": "bdev_nvme_attach_controller" 00:23:42.457 },{ 00:23:42.457 "params": { 00:23:42.457 "name": "Nvme8", 00:23:42.457 "trtype": "tcp", 00:23:42.457 "traddr": "10.0.0.2", 00:23:42.457 "adrfam": "ipv4", 00:23:42.457 "trsvcid": "4420", 00:23:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:42.457 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:42.457 "hdgst": false, 00:23:42.457 "ddgst": false 00:23:42.457 }, 00:23:42.457 "method": "bdev_nvme_attach_controller" 00:23:42.457 },{ 00:23:42.457 "params": { 00:23:42.457 "name": "Nvme9", 00:23:42.457 "trtype": "tcp", 00:23:42.457 "traddr": "10.0.0.2", 00:23:42.457 "adrfam": "ipv4", 00:23:42.457 "trsvcid": "4420", 00:23:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:42.457 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:42.457 "hdgst": false, 00:23:42.457 "ddgst": false 00:23:42.457 }, 00:23:42.457 "method": "bdev_nvme_attach_controller" 00:23:42.457 },{ 00:23:42.457 "params": { 00:23:42.457 "name": "Nvme10", 00:23:42.457 "trtype": "tcp", 00:23:42.457 "traddr": "10.0.0.2", 00:23:42.457 "adrfam": "ipv4", 00:23:42.457 "trsvcid": "4420", 00:23:42.457 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:42.457 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:42.457 "hdgst": false, 00:23:42.457 "ddgst": false 00:23:42.457 }, 00:23:42.457 "method": "bdev_nvme_attach_controller" 00:23:42.457 }' 00:23:42.457 [2024-04-26 15:04:28.188471] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:23:42.457 [2024-04-26 15:04:28.188542] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:42.715 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.715 [2024-04-26 15:04:28.224708] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:42.715 [2024-04-26 15:04:28.254507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.715 [2024-04-26 15:04:28.339728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.660 15:04:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:44.660 15:04:30 -- common/autotest_common.sh@850 -- # return 0 00:23:44.660 15:04:30 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:44.661 15:04:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.661 15:04:30 -- common/autotest_common.sh@10 -- # set +x 00:23:44.661 15:04:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.661 15:04:30 -- target/shutdown.sh@83 -- # kill -9 3839668 00:23:44.661 15:04:30 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:44.661 15:04:30 -- target/shutdown.sh@87 -- # sleep 1 00:23:45.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3839668 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:45.594 15:04:31 -- target/shutdown.sh@88 -- # kill -0 3839492 00:23:45.594 15:04:31 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:45.594 15:04:31 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:45.594 15:04:31 -- nvmf/common.sh@521 -- # config=() 00:23:45.594 15:04:31 -- nvmf/common.sh@521 -- # local subsystem config 00:23:45.594 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.594 { 00:23:45.594 "params": { 00:23:45.594 "name": "Nvme$subsystem", 00:23:45.594 "trtype": "$TEST_TRANSPORT", 00:23:45.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.594 "adrfam": "ipv4", 00:23:45.594 "trsvcid": "$NVMF_PORT", 00:23:45.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.594 "hdgst": ${hdgst:-false}, 00:23:45.594 "ddgst": ${ddgst:-false} 00:23:45.594 }, 00:23:45.594 "method": "bdev_nvme_attach_controller" 00:23:45.594 } 00:23:45.594 EOF 00:23:45.594 )") 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.594 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.594 { 00:23:45.594 "params": { 00:23:45.594 "name": "Nvme$subsystem", 00:23:45.594 "trtype": "$TEST_TRANSPORT", 00:23:45.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.594 "adrfam": "ipv4", 00:23:45.594 "trsvcid": "$NVMF_PORT", 00:23:45.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.594 "hdgst": ${hdgst:-false}, 00:23:45.594 "ddgst": ${ddgst:-false} 00:23:45.594 }, 00:23:45.594 "method": "bdev_nvme_attach_controller" 00:23:45.594 } 00:23:45.594 EOF 00:23:45.594 )") 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.594 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.594 { 00:23:45.594 "params": { 00:23:45.594 "name": "Nvme$subsystem", 00:23:45.594 "trtype": "$TEST_TRANSPORT", 00:23:45.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.594 "adrfam": "ipv4", 00:23:45.594 "trsvcid": "$NVMF_PORT", 00:23:45.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:45.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.594 "hdgst": ${hdgst:-false}, 00:23:45.594 "ddgst": ${ddgst:-false} 00:23:45.594 }, 00:23:45.594 "method": "bdev_nvme_attach_controller" 00:23:45.594 } 00:23:45.594 EOF 00:23:45.594 )") 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.594 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.594 { 00:23:45.594 "params": { 00:23:45.594 "name": "Nvme$subsystem", 00:23:45.594 "trtype": "$TEST_TRANSPORT", 00:23:45.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.594 "adrfam": "ipv4", 00:23:45.594 "trsvcid": "$NVMF_PORT", 00:23:45.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.594 "hdgst": ${hdgst:-false}, 00:23:45.594 "ddgst": ${ddgst:-false} 00:23:45.594 }, 00:23:45.594 "method": "bdev_nvme_attach_controller" 00:23:45.594 } 00:23:45.594 EOF 00:23:45.594 )") 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.594 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.594 { 00:23:45.594 "params": { 00:23:45.594 "name": "Nvme$subsystem", 00:23:45.594 "trtype": "$TEST_TRANSPORT", 00:23:45.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.594 "adrfam": "ipv4", 00:23:45.594 "trsvcid": "$NVMF_PORT", 00:23:45.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.594 "hdgst": ${hdgst:-false}, 00:23:45.594 "ddgst": ${ddgst:-false} 00:23:45.594 }, 00:23:45.594 "method": "bdev_nvme_attach_controller" 00:23:45.594 } 00:23:45.594 EOF 00:23:45.594 )") 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.594 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.594 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.594 { 00:23:45.594 "params": { 00:23:45.594 "name": "Nvme$subsystem", 00:23:45.594 "trtype": "$TEST_TRANSPORT", 00:23:45.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.594 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "$NVMF_PORT", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.595 "hdgst": ${hdgst:-false}, 00:23:45.595 "ddgst": ${ddgst:-false} 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 } 00:23:45.595 EOF 00:23:45.595 )") 00:23:45.595 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.595 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.595 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.595 { 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme$subsystem", 00:23:45.595 "trtype": "$TEST_TRANSPORT", 00:23:45.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "$NVMF_PORT", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.595 "hdgst": ${hdgst:-false}, 00:23:45.595 "ddgst": ${ddgst:-false} 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 } 00:23:45.595 EOF 00:23:45.595 )") 00:23:45.595 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.595 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.595 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.595 { 00:23:45.595 "params": { 00:23:45.595 "name": 
"Nvme$subsystem", 00:23:45.595 "trtype": "$TEST_TRANSPORT", 00:23:45.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "$NVMF_PORT", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.595 "hdgst": ${hdgst:-false}, 00:23:45.595 "ddgst": ${ddgst:-false} 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 } 00:23:45.595 EOF 00:23:45.595 )") 00:23:45.595 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.595 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.595 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.595 { 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme$subsystem", 00:23:45.595 "trtype": "$TEST_TRANSPORT", 00:23:45.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "$NVMF_PORT", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.595 "hdgst": ${hdgst:-false}, 00:23:45.595 "ddgst": ${ddgst:-false} 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 } 00:23:45.595 EOF 00:23:45.595 )") 00:23:45.595 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.595 15:04:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:45.595 15:04:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:45.595 { 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme$subsystem", 00:23:45.595 "trtype": "$TEST_TRANSPORT", 00:23:45.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "$NVMF_PORT", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.595 "hdgst": ${hdgst:-false}, 00:23:45.595 "ddgst": ${ddgst:-false} 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 } 00:23:45.595 EOF 00:23:45.595 )") 00:23:45.595 15:04:31 -- nvmf/common.sh@543 -- # cat 00:23:45.595 15:04:31 -- nvmf/common.sh@545 -- # jq . 
00:23:45.595 15:04:31 -- nvmf/common.sh@546 -- # IFS=, 00:23:45.595 15:04:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme1", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 },{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme2", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 },{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme3", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 },{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme4", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 },{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme5", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 },{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme6", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 },{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme7", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 },{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme8", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": 
"bdev_nvme_attach_controller" 00:23:45.595 },{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme9", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 },{ 00:23:45.595 "params": { 00:23:45.595 "name": "Nvme10", 00:23:45.595 "trtype": "tcp", 00:23:45.595 "traddr": "10.0.0.2", 00:23:45.595 "adrfam": "ipv4", 00:23:45.595 "trsvcid": "4420", 00:23:45.595 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:45.595 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:45.595 "hdgst": false, 00:23:45.595 "ddgst": false 00:23:45.595 }, 00:23:45.595 "method": "bdev_nvme_attach_controller" 00:23:45.595 }' 00:23:45.595 [2024-04-26 15:04:31.239279] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:23:45.595 [2024-04-26 15:04:31.239385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840089 ] 00:23:45.595 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.595 [2024-04-26 15:04:31.274805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:45.595 [2024-04-26 15:04:31.304255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.854 [2024-04-26 15:04:31.394133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.228 Running I/O for 1 seconds... 
00:23:48.601
00:23:48.601 Latency(us)
00:23:48.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:48.601 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme1n1 : 1.12 233.03 14.56 0.00 0.00 270475.19 6019.60 260978.92
00:23:48.601 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme2n1 : 1.14 225.24 14.08 0.00 0.00 275617.37 20000.62 268746.15
00:23:48.601 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme3n1 : 1.10 235.48 14.72 0.00 0.00 258069.38 9126.49 250104.79
00:23:48.601 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme4n1 : 1.11 234.23 14.64 0.00 0.00 255604.78 7087.60 262532.36
00:23:48.601 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme5n1 : 1.13 226.06 14.13 0.00 0.00 262758.78 21456.97 270299.59
00:23:48.601 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme6n1 : 1.12 228.68 14.29 0.00 0.00 254945.47 19418.07 265639.25
00:23:48.601 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme7n1 : 1.13 230.62 14.41 0.00 0.00 248204.20 2633.58 267192.70
00:23:48.601 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme8n1 : 1.14 224.24 14.01 0.00 0.00 251832.32 17282.09 271853.04
00:23:48.601 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme9n1 : 1.15 224.78 14.05 0.00 0.00 246979.98 849.54 268746.15
00:23:48.601 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.601 Verification LBA range: start 0x0 length 0x400
00:23:48.601 Nvme10n1 : 1.19 269.40 16.84 0.00 0.00 203602.19 5971.06 288940.94
00:23:48.601 ===================================================================================================================
00:23:48.601 Total : 2331.75 145.73 0.00 0.00 251644.54 849.54 288940.94
00:23:48.601 15:04:34 -- target/shutdown.sh@94 -- # stoptarget
00:23:48.601 15:04:34 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:48.601 15:04:34 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:48.601 15:04:34 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:48.601 15:04:34 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:48.601 15:04:34 -- nvmf/common.sh@477 -- # nvmfcleanup
00:23:48.601 15:04:34 -- nvmf/common.sh@117 -- # sync
00:23:48.601 15:04:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:48.601 15:04:34 -- nvmf/common.sh@120 -- # set +e
00:23:48.601 15:04:34 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:48.601 15:04:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
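The rmmod output above comes from a tolerant unload loop: nvmfcleanup disables errexit and retries modprobe -r up to 20 times, since the modules can still be in use while connections drain. A simplified sketch of that shape (the retry pause and exact exit conditions in the real nvmf/common.sh helper may differ):

# Simplified form of the nvmfcleanup unload loop seen in the trace.
nvmfcleanup_sketch() {
    sync
    set +e                      # modprobe -r may fail while connections unwind
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                 # assumption: brief pause before retrying
    done
    set -e
}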
00:23:48.601 15:04:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.601 15:04:34 -- nvmf/common.sh@124 -- # set -e 00:23:48.601 15:04:34 -- nvmf/common.sh@125 -- # return 0 00:23:48.601 15:04:34 -- nvmf/common.sh@478 -- # '[' -n 3839492 ']' 00:23:48.601 15:04:34 -- nvmf/common.sh@479 -- # killprocess 3839492 00:23:48.601 15:04:34 -- common/autotest_common.sh@936 -- # '[' -z 3839492 ']' 00:23:48.601 15:04:34 -- common/autotest_common.sh@940 -- # kill -0 3839492 00:23:48.601 15:04:34 -- common/autotest_common.sh@941 -- # uname 00:23:48.601 15:04:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:48.601 15:04:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3839492 00:23:48.601 15:04:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:48.601 15:04:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:48.601 15:04:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3839492' 00:23:48.601 killing process with pid 3839492 00:23:48.601 15:04:34 -- common/autotest_common.sh@955 -- # kill 3839492 00:23:48.601 15:04:34 -- common/autotest_common.sh@960 -- # wait 3839492 00:23:49.168 15:04:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:49.168 15:04:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:49.168 15:04:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:49.168 15:04:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.168 15:04:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.168 15:04:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.168 15:04:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.168 15:04:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.069 15:04:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.069 00:23:51.069 real 0m11.737s 00:23:51.069 user 0m33.915s 00:23:51.069 sys 0m3.306s 00:23:51.069 15:04:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:51.069 15:04:36 -- common/autotest_common.sh@10 -- # set +x 00:23:51.069 ************************************ 00:23:51.069 END TEST nvmf_shutdown_tc1 00:23:51.069 ************************************ 00:23:51.327 15:04:36 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:51.327 15:04:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:51.327 15:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:51.327 15:04:36 -- common/autotest_common.sh@10 -- # set +x 00:23:51.327 ************************************ 00:23:51.327 START TEST nvmf_shutdown_tc2 00:23:51.327 ************************************ 00:23:51.327 15:04:36 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:23:51.327 15:04:36 -- target/shutdown.sh@99 -- # starttarget 00:23:51.327 15:04:36 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:51.327 15:04:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:51.327 15:04:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.327 15:04:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:51.327 15:04:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:51.327 15:04:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:51.327 15:04:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.327 15:04:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.327 15:04:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.327 15:04:36 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:51.327 15:04:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:51.327 15:04:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.327 15:04:36 -- common/autotest_common.sh@10 -- # set +x 00:23:51.327 15:04:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:51.327 15:04:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.327 15:04:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.327 15:04:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.327 15:04:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.327 15:04:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.327 15:04:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.327 15:04:36 -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.327 15:04:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.327 15:04:36 -- nvmf/common.sh@296 -- # e810=() 00:23:51.327 15:04:36 -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.327 15:04:36 -- nvmf/common.sh@297 -- # x722=() 00:23:51.327 15:04:36 -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.327 15:04:36 -- nvmf/common.sh@298 -- # mlx=() 00:23:51.327 15:04:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.327 15:04:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.327 15:04:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.327 15:04:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.327 15:04:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.327 15:04:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.327 15:04:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.327 15:04:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.327 15:04:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.327 15:04:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:51.327 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:51.327 15:04:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.327 15:04:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.327 15:04:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.327 15:04:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.328 15:04:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:51.328 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:51.328 15:04:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.328 15:04:36 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.328 15:04:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.328 15:04:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.328 15:04:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:51.328 15:04:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.328 15:04:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:51.328 Found net devices under 0000:84:00.0: cvl_0_0 00:23:51.328 15:04:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.328 15:04:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.328 15:04:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.328 15:04:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:51.328 15:04:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.328 15:04:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:51.328 Found net devices under 0000:84:00.1: cvl_0_1 00:23:51.328 15:04:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.328 15:04:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:51.328 15:04:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:51.328 15:04:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:51.328 15:04:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:51.328 15:04:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.328 15:04:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.328 15:04:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.328 15:04:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.328 15:04:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.328 15:04:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.328 15:04:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.328 15:04:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.328 15:04:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.328 15:04:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.328 15:04:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.328 15:04:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.328 15:04:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.328 15:04:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.328 15:04:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.328 15:04:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.328 15:04:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.328 15:04:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.328 15:04:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
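nvmf_tcp_init turns the NIC's two ports into a point-to-point test topology: cvl_0_0 moves into a fresh network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits the NVMe/TCP port. Condensed from the trace above:

# Target interface lives in its own namespace; initiator stays in the root ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Keeping the target in a separate namespace gives a real TCP path across the physical link even though both ports sit in one host.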
00:23:51.328 15:04:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:23:51.328 00:23:51.328 --- 10.0.0.2 ping statistics --- 00:23:51.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.328 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:23:51.328 15:04:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:23:51.586 00:23:51.586 --- 10.0.0.1 ping statistics --- 00:23:51.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.586 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:51.586 15:04:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.586 15:04:37 -- nvmf/common.sh@411 -- # return 0 00:23:51.586 15:04:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:51.586 15:04:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.586 15:04:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:51.586 15:04:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:51.586 15:04:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.586 15:04:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:51.586 15:04:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:51.586 15:04:37 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:51.586 15:04:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:51.586 15:04:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:51.586 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:23:51.586 15:04:37 -- nvmf/common.sh@470 -- # nvmfpid=3840858 00:23:51.586 15:04:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:51.586 15:04:37 -- nvmf/common.sh@471 -- # waitforlisten 3840858 00:23:51.586 15:04:37 -- common/autotest_common.sh@817 -- # '[' -z 3840858 ']' 00:23:51.586 15:04:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.586 15:04:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:51.586 15:04:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.586 15:04:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:51.586 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:23:51.586 [2024-04-26 15:04:37.138767] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:23:51.586 [2024-04-26 15:04:37.138872] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.586 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.586 [2024-04-26 15:04:37.177986] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
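After the ping checks confirm the path, nvmfappstart launches nvmf_tgt inside the namespace and blocks in waitforlisten until the RPC socket answers. A hypothetical reduction of that gate (the real helper in autotest_common.sh enforces its own retry budget and error handling; rpc_get_methods is a standard SPDK RPC used here only as a liveness probe):

# Sketch: poll until the app's RPC socket exists and accepts a command.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" || return 1      # app died before it started listening
        [[ -S $rpc_addr ]] \
            && "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null \
            && return 0
        sleep 0.1
    done
    return 1
}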
00:23:51.586 [2024-04-26 15:04:37.205256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.586 [2024-04-26 15:04:37.293269] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.586 [2024-04-26 15:04:37.293340] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.586 [2024-04-26 15:04:37.293362] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.586 [2024-04-26 15:04:37.293374] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.586 [2024-04-26 15:04:37.293384] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.586 [2024-04-26 15:04:37.293472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.586 [2024-04-26 15:04:37.293535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.587 [2024-04-26 15:04:37.293603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:51.587 [2024-04-26 15:04:37.293605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.845 15:04:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:51.845 15:04:37 -- common/autotest_common.sh@850 -- # return 0 00:23:51.845 15:04:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:51.845 15:04:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:51.845 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:23:51.845 15:04:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.845 15:04:37 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.845 15:04:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.845 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:23:51.845 [2024-04-26 15:04:37.439701] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.845 15:04:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.845 15:04:37 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:51.845 15:04:37 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:51.845 15:04:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:51.845 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:23:51.845 15:04:37 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:51.845 15:04:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:51.845 15:04:37 -- target/shutdown.sh@28 -- # cat 00:23:51.845 15:04:37 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:51.845 15:04:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.845 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:23:51.845 Malloc1 00:23:51.845 [2024-04-26 15:04:37.522980] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.845 Malloc2 00:23:52.104 Malloc3 00:23:52.104 Malloc4 00:23:52.104 Malloc5 00:23:52.104 Malloc6 00:23:52.104 Malloc7 00:23:52.104 Malloc8 00:23:52.363 Malloc9 00:23:52.363 Malloc10 00:23:52.363 15:04:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.363 15:04:37 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:52.363 15:04:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:52.363 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:23:52.363 15:04:37 -- target/shutdown.sh@103 -- # perfpid=3841038 00:23:52.363 15:04:37 -- target/shutdown.sh@104 -- # waitforlisten 3841038 /var/tmp/bdevperf.sock 00:23:52.363 15:04:37 -- common/autotest_common.sh@817 -- # '[' -z 3841038 ']' 00:23:52.363 15:04:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.363 15:04:37 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:52.363 15:04:37 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:52.363 15:04:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:52.363 15:04:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
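create_subsystems does not issue one RPC at a time: each loop pass above cats a block of commands into rpcs.txt, and a single rpc_cmd call replays the whole file, which is why Malloc1 through Malloc10 appear back to back. A hypothetical per-subsystem block with that batch-then-replay shape (the exact RPC lines in shutdown.sh may differ; the command names are current SPDK RPCs):

# One block per subsystem, appended to rpcs.txt and replayed in one RPC session.
rm -f "$rpc_file"
for i in "${num_subsystems[@]}"; do
    {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> "$rpc_file"
done
rpc_cmd < "$rpc_file"   # creates Malloc1..Malloc10 and their subsystems in one pass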
00:23:52.363 15:04:37 -- nvmf/common.sh@521 -- # config=() 00:23:52.363 15:04:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:52.363 15:04:37 -- nvmf/common.sh@521 -- # local subsystem config 00:23:52.363 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:23:52.363 15:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.363 { 00:23:52.363 "params": { 00:23:52.363 "name": "Nvme$subsystem", 00:23:52.363 "trtype": "$TEST_TRANSPORT", 00:23:52.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.363 "adrfam": "ipv4", 00:23:52.363 "trsvcid": "$NVMF_PORT", 00:23:52.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.363 "hdgst": ${hdgst:-false}, 00:23:52.363 "ddgst": ${ddgst:-false} 00:23:52.363 }, 00:23:52.363 "method": "bdev_nvme_attach_controller" 00:23:52.363 } 00:23:52.363 EOF 00:23:52.363 )") 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # cat 00:23:52.363 15:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.363 { 00:23:52.363 "params": { 00:23:52.363 "name": "Nvme$subsystem", 00:23:52.363 "trtype": "$TEST_TRANSPORT", 00:23:52.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.363 "adrfam": "ipv4", 00:23:52.363 "trsvcid": "$NVMF_PORT", 00:23:52.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.363 "hdgst": ${hdgst:-false}, 00:23:52.363 "ddgst": ${ddgst:-false} 00:23:52.363 }, 00:23:52.363 "method": "bdev_nvme_attach_controller" 00:23:52.363 } 00:23:52.363 EOF 00:23:52.363 )") 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # cat 00:23:52.363 15:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.363 { 00:23:52.363 "params": { 00:23:52.363 "name": "Nvme$subsystem", 00:23:52.363 "trtype": "$TEST_TRANSPORT", 00:23:52.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.363 "adrfam": "ipv4", 00:23:52.363 "trsvcid": "$NVMF_PORT", 00:23:52.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.363 "hdgst": ${hdgst:-false}, 00:23:52.363 "ddgst": ${ddgst:-false} 00:23:52.363 }, 00:23:52.363 "method": "bdev_nvme_attach_controller" 00:23:52.363 } 00:23:52.363 EOF 00:23:52.363 )") 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # cat 00:23:52.363 15:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.363 { 00:23:52.363 "params": { 00:23:52.363 "name": "Nvme$subsystem", 00:23:52.363 "trtype": "$TEST_TRANSPORT", 00:23:52.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.363 "adrfam": "ipv4", 00:23:52.363 "trsvcid": "$NVMF_PORT", 00:23:52.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.363 "hdgst": ${hdgst:-false}, 00:23:52.363 "ddgst": ${ddgst:-false} 00:23:52.363 }, 00:23:52.363 "method": "bdev_nvme_attach_controller" 00:23:52.363 } 00:23:52.363 EOF 00:23:52.363 )") 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # cat 00:23:52.363 15:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.363 { 00:23:52.363 "params": { 00:23:52.363 "name": "Nvme$subsystem", 00:23:52.363 "trtype": 
"$TEST_TRANSPORT", 00:23:52.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.363 "adrfam": "ipv4", 00:23:52.363 "trsvcid": "$NVMF_PORT", 00:23:52.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.363 "hdgst": ${hdgst:-false}, 00:23:52.363 "ddgst": ${ddgst:-false} 00:23:52.363 }, 00:23:52.363 "method": "bdev_nvme_attach_controller" 00:23:52.363 } 00:23:52.363 EOF 00:23:52.363 )") 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # cat 00:23:52.363 15:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.363 { 00:23:52.363 "params": { 00:23:52.363 "name": "Nvme$subsystem", 00:23:52.363 "trtype": "$TEST_TRANSPORT", 00:23:52.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.363 "adrfam": "ipv4", 00:23:52.363 "trsvcid": "$NVMF_PORT", 00:23:52.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.363 "hdgst": ${hdgst:-false}, 00:23:52.363 "ddgst": ${ddgst:-false} 00:23:52.363 }, 00:23:52.363 "method": "bdev_nvme_attach_controller" 00:23:52.363 } 00:23:52.363 EOF 00:23:52.363 )") 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # cat 00:23:52.363 15:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.363 { 00:23:52.363 "params": { 00:23:52.363 "name": "Nvme$subsystem", 00:23:52.363 "trtype": "$TEST_TRANSPORT", 00:23:52.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.363 "adrfam": "ipv4", 00:23:52.363 "trsvcid": "$NVMF_PORT", 00:23:52.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.363 "hdgst": ${hdgst:-false}, 00:23:52.363 "ddgst": ${ddgst:-false} 00:23:52.363 }, 00:23:52.363 "method": "bdev_nvme_attach_controller" 00:23:52.363 } 00:23:52.363 EOF 00:23:52.363 )") 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # cat 00:23:52.363 15:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.363 { 00:23:52.363 "params": { 00:23:52.363 "name": "Nvme$subsystem", 00:23:52.363 "trtype": "$TEST_TRANSPORT", 00:23:52.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.363 "adrfam": "ipv4", 00:23:52.363 "trsvcid": "$NVMF_PORT", 00:23:52.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.363 "hdgst": ${hdgst:-false}, 00:23:52.363 "ddgst": ${ddgst:-false} 00:23:52.363 }, 00:23:52.363 "method": "bdev_nvme_attach_controller" 00:23:52.363 } 00:23:52.363 EOF 00:23:52.363 )") 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # cat 00:23:52.363 15:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.363 15:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.363 { 00:23:52.363 "params": { 00:23:52.364 "name": "Nvme$subsystem", 00:23:52.364 "trtype": "$TEST_TRANSPORT", 00:23:52.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "$NVMF_PORT", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.364 "hdgst": ${hdgst:-false}, 00:23:52.364 "ddgst": ${ddgst:-false} 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 } 00:23:52.364 EOF 00:23:52.364 )") 00:23:52.364 15:04:37 -- nvmf/common.sh@543 -- # cat 00:23:52.364 
15:04:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.364 15:04:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.364 { 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme$subsystem", 00:23:52.364 "trtype": "$TEST_TRANSPORT", 00:23:52.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "$NVMF_PORT", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.364 "hdgst": ${hdgst:-false}, 00:23:52.364 "ddgst": ${ddgst:-false} 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 } 00:23:52.364 EOF 00:23:52.364 )") 00:23:52.364 15:04:38 -- nvmf/common.sh@543 -- # cat 00:23:52.364 15:04:38 -- nvmf/common.sh@545 -- # jq . 00:23:52.364 15:04:38 -- nvmf/common.sh@546 -- # IFS=, 00:23:52.364 15:04:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme1", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 },{ 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme2", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 },{ 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme3", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 },{ 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme4", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 },{ 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme5", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 },{ 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme6", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 },{ 00:23:52.364 "params": { 00:23:52.364 
"name": "Nvme7", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 },{ 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme8", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 },{ 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme9", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 },{ 00:23:52.364 "params": { 00:23:52.364 "name": "Nvme10", 00:23:52.364 "trtype": "tcp", 00:23:52.364 "traddr": "10.0.0.2", 00:23:52.364 "adrfam": "ipv4", 00:23:52.364 "trsvcid": "4420", 00:23:52.364 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:52.364 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:52.364 "hdgst": false, 00:23:52.364 "ddgst": false 00:23:52.364 }, 00:23:52.364 "method": "bdev_nvme_attach_controller" 00:23:52.364 }' 00:23:52.364 [2024-04-26 15:04:38.016872] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:23:52.364 [2024-04-26 15:04:38.016944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841038 ] 00:23:52.364 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.364 [2024-04-26 15:04:38.052110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:52.364 [2024-04-26 15:04:38.081507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.623 [2024-04-26 15:04:38.167422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.523 Running I/O for 10 seconds... 
00:23:54.523 15:04:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:54.523 15:04:40 -- common/autotest_common.sh@850 -- # return 0 00:23:54.523 15:04:40 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:54.523 15:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.523 15:04:40 -- common/autotest_common.sh@10 -- # set +x 00:23:54.523 15:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.523 15:04:40 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:54.523 15:04:40 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:54.523 15:04:40 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:54.523 15:04:40 -- target/shutdown.sh@57 -- # local ret=1 00:23:54.523 15:04:40 -- target/shutdown.sh@58 -- # local i 00:23:54.523 15:04:40 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:54.523 15:04:40 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:54.523 15:04:40 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:54.523 15:04:40 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:54.523 15:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.523 15:04:40 -- common/autotest_common.sh@10 -- # set +x 00:23:54.523 15:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.523 15:04:40 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:54.523 15:04:40 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:54.523 15:04:40 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:54.782 15:04:40 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:54.782 15:04:40 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:54.782 15:04:40 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:54.782 15:04:40 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:54.782 15:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.782 15:04:40 -- common/autotest_common.sh@10 -- # set +x 00:23:54.782 15:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.782 15:04:40 -- target/shutdown.sh@60 -- # read_io_count=72 00:23:54.782 15:04:40 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:23:54.782 15:04:40 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:55.040 15:04:40 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:55.041 15:04:40 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:55.041 15:04:40 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:55.041 15:04:40 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:55.041 15:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.041 15:04:40 -- common/autotest_common.sh@10 -- # set +x 00:23:55.041 15:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.041 15:04:40 -- target/shutdown.sh@60 -- # read_io_count=141 00:23:55.041 15:04:40 -- target/shutdown.sh@63 -- # '[' 141 -ge 100 ']' 00:23:55.041 15:04:40 -- target/shutdown.sh@64 -- # ret=0 00:23:55.041 15:04:40 -- target/shutdown.sh@65 -- # break 00:23:55.041 15:04:40 -- target/shutdown.sh@69 -- # return 0 00:23:55.041 15:04:40 -- target/shutdown.sh@110 -- # killprocess 3841038 00:23:55.041 15:04:40 -- common/autotest_common.sh@936 -- # '[' -z 3841038 ']' 00:23:55.041 15:04:40 -- common/autotest_common.sh@940 -- # kill -0 3841038 00:23:55.041 15:04:40 -- common/autotest_common.sh@941 -- # uname 00:23:55.041 15:04:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
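The read_io_count values climbing from 3 to 72 to 141 above come from waitforio, which polls bdev_get_iostat through the bdevperf socket until Nvme1n1 shows at least 100 completed reads, proving I/O is actually flowing before the target is killed out from under it. Reconstructed from the shutdown.sh@57-@69 trace:

# waitforio: poll until the named bdev shows >= 100 reads, at most 10 tries.
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

Invoked here as waitforio /var/tmp/bdevperf.sock Nvme1n1; the 0.25 s polling interval and 10-try budget match the sleeps and counter seen in the trace.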
00:23:55.041 15:04:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3841038
00:23:55.041 15:04:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:55.041 15:04:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:55.041 15:04:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3841038'
00:23:55.041 killing process with pid 3841038
00:23:55.041 15:04:40 -- common/autotest_common.sh@955 -- # kill 3841038
00:23:55.041 15:04:40 -- common/autotest_common.sh@960 -- # wait 3841038
00:23:55.299 Received shutdown signal, test time was about 0.956398 seconds
00:23:55.299
00:23:55.299 Latency(us)
00:23:55.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:55.299 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.299 Verification LBA range: start 0x0 length 0x400
00:23:55.299 Nvme1n1 : 0.95 269.52 16.85 0.00 0.00 232647.11 15825.73 259425.47
00:23:55.299 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.299 Verification LBA range: start 0x0 length 0x400
00:23:55.299 Nvme2n1 : 0.93 206.23 12.89 0.00 0.00 300602.41 19515.16 264085.81
00:23:55.299 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.299 Verification LBA range: start 0x0 length 0x400
00:23:55.300 Nvme3n1 : 0.95 268.76 16.80 0.00 0.00 225880.37 28544.57 256318.58
00:23:55.300 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.300 Verification LBA range: start 0x0 length 0x400
00:23:55.300 Nvme4n1 : 0.96 267.92 16.75 0.00 0.00 221860.79 17282.09 265639.25
00:23:55.300 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.300 Verification LBA range: start 0x0 length 0x400
00:23:55.300 Nvme5n1 : 0.91 210.64 13.17 0.00 0.00 275857.00 33593.27 253211.69
00:23:55.300 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.300 Verification LBA range: start 0x0 length 0x400
00:23:55.300 Nvme6n1 : 0.90 212.31 13.27 0.00 0.00 267522.40 21359.88 259425.47
00:23:55.300 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.300 Verification LBA range: start 0x0 length 0x400
00:23:55.300 Nvme7n1 : 0.92 207.83 12.99 0.00 0.00 267910.00 22913.33 257872.02
00:23:55.300 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.300 Verification LBA range: start 0x0 length 0x400
00:23:55.300 Nvme8n1 : 0.92 209.25 13.08 0.00 0.00 259924.57 29709.65 257872.02
00:23:55.300 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.300 Verification LBA range: start 0x0 length 0x400
00:23:55.300 Nvme9n1 : 0.94 204.38 12.77 0.00 0.00 261608.74 20971.52 279620.27
00:23:55.300 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:55.300 Verification LBA range: start 0x0 length 0x400
00:23:55.300 Nvme10n1 : 0.94 203.39 12.71 0.00 0.00 257109.21 20777.34 293601.28
00:23:55.300 ===================================================================================================================
00:23:55.300 Total : 2260.24 141.27 0.00 0.00 254338.06 15825.73 293601.28
00:23:55.557 15:04:41 -- target/shutdown.sh@113 -- # sleep 1
00:23:56.492 15:04:42 -- target/shutdown.sh@114 -- # kill -0 3840858
00:23:56.492 15:04:42 -- target/shutdown.sh@116 -- # stoptarget
00:23:56.492 15:04:42 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:56.492 15:04:42 -- target/shutdown.sh@42
-- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:56.492 15:04:42 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:56.492 15:04:42 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:56.492 15:04:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:56.492 15:04:42 -- nvmf/common.sh@117 -- # sync 00:23:56.492 15:04:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.492 15:04:42 -- nvmf/common.sh@120 -- # set +e 00:23:56.492 15:04:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.492 15:04:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.492 rmmod nvme_tcp 00:23:56.492 rmmod nvme_fabrics 00:23:56.492 rmmod nvme_keyring 00:23:56.492 15:04:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.492 15:04:42 -- nvmf/common.sh@124 -- # set -e 00:23:56.492 15:04:42 -- nvmf/common.sh@125 -- # return 0 00:23:56.492 15:04:42 -- nvmf/common.sh@478 -- # '[' -n 3840858 ']' 00:23:56.492 15:04:42 -- nvmf/common.sh@479 -- # killprocess 3840858 00:23:56.492 15:04:42 -- common/autotest_common.sh@936 -- # '[' -z 3840858 ']' 00:23:56.492 15:04:42 -- common/autotest_common.sh@940 -- # kill -0 3840858 00:23:56.492 15:04:42 -- common/autotest_common.sh@941 -- # uname 00:23:56.492 15:04:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:56.492 15:04:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3840858 00:23:56.492 15:04:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:56.492 15:04:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:56.492 15:04:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3840858' 00:23:56.492 killing process with pid 3840858 00:23:56.492 15:04:42 -- common/autotest_common.sh@955 -- # kill 3840858 00:23:56.492 15:04:42 -- common/autotest_common.sh@960 -- # wait 3840858 00:23:57.059 15:04:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:57.059 15:04:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:57.059 15:04:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:57.059 15:04:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.059 15:04:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.059 15:04:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.059 15:04:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.059 15:04:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.965 15:04:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.965 00:23:58.965 real 0m7.750s 00:23:58.965 user 0m23.739s 00:23:58.965 sys 0m1.504s 00:23:58.965 15:04:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:58.965 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:23:58.965 ************************************ 00:23:58.965 END TEST nvmf_shutdown_tc2 00:23:58.965 ************************************ 00:23:58.965 15:04:44 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:58.965 15:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:58.965 15:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:58.965 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:23:59.223 ************************************ 00:23:59.223 START TEST nvmf_shutdown_tc3 00:23:59.223 ************************************ 00:23:59.223 15:04:44 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 
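With the tc2 target killed, nvmf_tcp_fini only has to undo the namespace plumbing; remove_spdk_ns runs through xtrace_disable_per_cmd to keep the log quiet, so just the final address flush shows up below. A hedged sketch of what the teardown amounts to for this topology (assumption: the real remove_spdk_ns discovers SPDK-created namespaces rather than hard-coding this one):

# Simplified teardown for the cvl_0_0/cvl_0_1 split set up by nvmf_tcp_init.
ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true   # returns cvl_0_0 to the root ns
ip -4 addr flush cvl_0_1                               # drop the initiator-side address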
00:23:59.223 15:04:44 -- target/shutdown.sh@121 -- # starttarget 00:23:59.223 15:04:44 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:59.223 15:04:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:59.223 15:04:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.223 15:04:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:59.223 15:04:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:59.223 15:04:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:59.223 15:04:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.223 15:04:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.223 15:04:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.223 15:04:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:59.223 15:04:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:59.223 15:04:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.223 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:23:59.223 15:04:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:59.223 15:04:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.223 15:04:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.223 15:04:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.223 15:04:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.223 15:04:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.223 15:04:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.223 15:04:44 -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.223 15:04:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.223 15:04:44 -- nvmf/common.sh@296 -- # e810=() 00:23:59.223 15:04:44 -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.223 15:04:44 -- nvmf/common.sh@297 -- # x722=() 00:23:59.223 15:04:44 -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.223 15:04:44 -- nvmf/common.sh@298 -- # mlx=() 00:23:59.223 15:04:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.223 15:04:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.223 15:04:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.223 15:04:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.223 15:04:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.223 15:04:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.223 15:04:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.223 15:04:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.223 15:04:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.223 15:04:44 -- nvmf/common.sh@341 -- # echo 
'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:59.223 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:59.223 15:04:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.223 15:04:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.223 15:04:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.223 15:04:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.223 15:04:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.224 15:04:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:59.224 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:59.224 15:04:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.224 15:04:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.224 15:04:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.224 15:04:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:59.224 15:04:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.224 15:04:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:59.224 Found net devices under 0000:84:00.0: cvl_0_0 00:23:59.224 15:04:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.224 15:04:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.224 15:04:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.224 15:04:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:59.224 15:04:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.224 15:04:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:59.224 Found net devices under 0000:84:00.1: cvl_0_1 00:23:59.224 15:04:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.224 15:04:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:59.224 15:04:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:59.224 15:04:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:59.224 15:04:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.224 15:04:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.224 15:04:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.224 15:04:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.224 15:04:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.224 15:04:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.224 15:04:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.224 15:04:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.224 15:04:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.224 15:04:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.224 15:04:44 -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:23:59.224 15:04:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.224 15:04:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.224 15:04:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.224 15:04:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.224 15:04:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.224 15:04:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.224 15:04:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.224 15:04:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.224 15:04:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:59.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:23:59.224 00:23:59.224 --- 10.0.0.2 ping statistics --- 00:23:59.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.224 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:23:59.224 15:04:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:23:59.224 00:23:59.224 --- 10.0.0.1 ping statistics --- 00:23:59.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.224 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:23:59.224 15:04:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.224 15:04:44 -- nvmf/common.sh@411 -- # return 0 00:23:59.224 15:04:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:59.224 15:04:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.224 15:04:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:59.224 15:04:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.224 15:04:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:59.224 15:04:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:59.482 15:04:44 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:59.482 15:04:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:59.482 15:04:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:59.482 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:23:59.482 15:04:44 -- nvmf/common.sh@470 -- # nvmfpid=3841960 00:23:59.482 15:04:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:59.482 15:04:44 -- nvmf/common.sh@471 -- # waitforlisten 3841960 00:23:59.482 15:04:44 -- common/autotest_common.sh@817 -- # '[' -z 3841960 ']' 00:23:59.482 15:04:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.482 15:04:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:59.482 15:04:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
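The device-discovery portion of the trace above (gather_supported_nvmf_pci_devs) works from a table of vendor:device IDs -- 0x8086:0x159b matched both E810 ports in this run -- and then globs each function's sysfs net directory to find the kernel interfaces. A rough standalone equivalent, assuming lspci is available (the pci_bus_cache plumbing in nvmf/common.sh itself is more involved and is not reproduced here):

# build a vendor:device -> PCI address map with lspci, then find the net devices
declare -A pci_bus_cache
for id in 8086:1592 8086:159b 8086:37d2 15b3:1017; do
    addrs=$(lspci -D -d "$id" | awk '{print $1}')      # -D prints full domain:bus:dev.fn
    [ -n "$addrs" ] && pci_bus_cache[$id]=$addrs
done

for pci in ${pci_bus_cache[8086:159b]}; do             # unquoted: one address per word
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
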
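Collected out of the nvmf_tcp_init trace above, the namespace plumbing that produces the two pingable endpoints reduces to the following commands (interface names and addresses exactly as in this run; requires root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
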
00:23:59.482 15:04:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:59.482 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:23:59.482 [2024-04-26 15:04:45.015245] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:23:59.482 [2024-04-26 15:04:45.015318] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.482 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.482 [2024-04-26 15:04:45.052505] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:59.482 [2024-04-26 15:04:45.080112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.482 [2024-04-26 15:04:45.166569] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.482 [2024-04-26 15:04:45.166626] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.482 [2024-04-26 15:04:45.166658] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.482 [2024-04-26 15:04:45.166671] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.482 [2024-04-26 15:04:45.166682] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.482 [2024-04-26 15:04:45.166835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.482 [2024-04-26 15:04:45.166880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.482 [2024-04-26 15:04:45.166936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:59.482 [2024-04-26 15:04:45.166939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.740 15:04:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:59.740 15:04:45 -- common/autotest_common.sh@850 -- # return 0 00:23:59.740 15:04:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:59.740 15:04:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:59.740 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:23:59.740 15:04:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.740 15:04:45 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.740 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.740 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:23:59.740 [2024-04-26 15:04:45.338001] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.740 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.740 15:04:45 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:59.740 15:04:45 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:59.740 15:04:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:59.740 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:23:59.740 15:04:45 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 
15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.740 15:04:45 -- target/shutdown.sh@28 -- # cat 00:23:59.740 15:04:45 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:59.740 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.740 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:23:59.740 Malloc1 00:23:59.740 [2024-04-26 15:04:45.427947] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.740 Malloc2 00:23:59.999 Malloc3 00:23:59.999 Malloc4 00:23:59.999 Malloc5 00:23:59.999 Malloc6 00:23:59.999 Malloc7 00:24:00.277 Malloc8 00:24:00.277 Malloc9 00:24:00.277 Malloc10 00:24:00.277 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.277 15:04:45 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:00.277 15:04:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:00.277 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:24:00.277 15:04:45 -- target/shutdown.sh@125 -- # perfpid=3842131 00:24:00.277 15:04:45 -- target/shutdown.sh@126 -- # waitforlisten 3842131 /var/tmp/bdevperf.sock 00:24:00.277 15:04:45 -- common/autotest_common.sh@817 -- # '[' -z 3842131 ']' 00:24:00.277 15:04:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.277 15:04:45 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:00.277 15:04:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:00.277 15:04:45 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:00.277 15:04:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
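The shutdown.sh@28 cat loop above appends one block of RPC commands per subsystem to rpcs.txt, which is what yields Malloc1 through Malloc10 and the listener on 10.0.0.2:4420. The heredoc body itself is not shown in the log; a plausible per-subsystem block using the standard SPDK RPC names would look like this (the exact sizes and flags in shutdown.sh may differ):

for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
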
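Both the target (pid 3841960, socket /var/tmp/spdk.sock) and the bdevperf process being started here are gated on waitforlisten. A minimal sketch of that polling loop, assuming SPDK's rpc.py is on PATH (the real helper in autotest_common.sh also handles TCP RPC addresses and configurable timeouts):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1        # the app died during startup
        # rpc_get_methods only answers once the RPC server is actually listening
        if rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
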
00:24:00.277 15:04:45 -- nvmf/common.sh@521 -- # config=() 00:24:00.277 15:04:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:00.277 15:04:45 -- nvmf/common.sh@521 -- # local subsystem config 00:24:00.277 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:24:00.277 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.277 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.277 { 00:24:00.277 "params": { 00:24:00.277 "name": "Nvme$subsystem", 00:24:00.278 "trtype": "$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.278 { 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme$subsystem", 00:24:00.278 "trtype": "$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.278 { 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme$subsystem", 00:24:00.278 "trtype": "$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.278 { 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme$subsystem", 00:24:00.278 "trtype": "$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.278 { 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme$subsystem", 00:24:00.278 "trtype": 
"$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.278 { 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme$subsystem", 00:24:00.278 "trtype": "$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.278 { 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme$subsystem", 00:24:00.278 "trtype": "$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.278 { 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme$subsystem", 00:24:00.278 "trtype": "$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.278 { 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme$subsystem", 00:24:00.278 "trtype": "$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 
15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:00.278 { 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme$subsystem", 00:24:00.278 "trtype": "$TEST_TRANSPORT", 00:24:00.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "$NVMF_PORT", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.278 "hdgst": ${hdgst:-false}, 00:24:00.278 "ddgst": ${ddgst:-false} 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 } 00:24:00.278 EOF 00:24:00.278 )") 00:24:00.278 15:04:45 -- nvmf/common.sh@543 -- # cat 00:24:00.278 15:04:45 -- nvmf/common.sh@545 -- # jq . 00:24:00.278 15:04:45 -- nvmf/common.sh@546 -- # IFS=, 00:24:00.278 15:04:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme1", 00:24:00.278 "trtype": "tcp", 00:24:00.278 "traddr": "10.0.0.2", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "4420", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.278 "hdgst": false, 00:24:00.278 "ddgst": false 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 },{ 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme2", 00:24:00.278 "trtype": "tcp", 00:24:00.278 "traddr": "10.0.0.2", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "4420", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:00.278 "hdgst": false, 00:24:00.278 "ddgst": false 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 },{ 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme3", 00:24:00.278 "trtype": "tcp", 00:24:00.278 "traddr": "10.0.0.2", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "4420", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:00.278 "hdgst": false, 00:24:00.278 "ddgst": false 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 },{ 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme4", 00:24:00.278 "trtype": "tcp", 00:24:00.278 "traddr": "10.0.0.2", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "4420", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:00.278 "hdgst": false, 00:24:00.278 "ddgst": false 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 },{ 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme5", 00:24:00.278 "trtype": "tcp", 00:24:00.278 "traddr": "10.0.0.2", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "4420", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:00.278 "hdgst": false, 00:24:00.278 "ddgst": false 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 },{ 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme6", 00:24:00.278 "trtype": "tcp", 00:24:00.278 "traddr": "10.0.0.2", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "4420", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:00.278 "hdgst": false, 00:24:00.278 "ddgst": false 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 },{ 00:24:00.278 "params": { 00:24:00.278 
"name": "Nvme7", 00:24:00.278 "trtype": "tcp", 00:24:00.278 "traddr": "10.0.0.2", 00:24:00.278 "adrfam": "ipv4", 00:24:00.278 "trsvcid": "4420", 00:24:00.278 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:00.278 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:00.278 "hdgst": false, 00:24:00.278 "ddgst": false 00:24:00.278 }, 00:24:00.278 "method": "bdev_nvme_attach_controller" 00:24:00.278 },{ 00:24:00.278 "params": { 00:24:00.278 "name": "Nvme8", 00:24:00.278 "trtype": "tcp", 00:24:00.279 "traddr": "10.0.0.2", 00:24:00.279 "adrfam": "ipv4", 00:24:00.279 "trsvcid": "4420", 00:24:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:00.279 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:00.279 "hdgst": false, 00:24:00.279 "ddgst": false 00:24:00.279 }, 00:24:00.279 "method": "bdev_nvme_attach_controller" 00:24:00.279 },{ 00:24:00.279 "params": { 00:24:00.279 "name": "Nvme9", 00:24:00.279 "trtype": "tcp", 00:24:00.279 "traddr": "10.0.0.2", 00:24:00.279 "adrfam": "ipv4", 00:24:00.279 "trsvcid": "4420", 00:24:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:00.279 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:00.279 "hdgst": false, 00:24:00.279 "ddgst": false 00:24:00.279 }, 00:24:00.279 "method": "bdev_nvme_attach_controller" 00:24:00.279 },{ 00:24:00.279 "params": { 00:24:00.279 "name": "Nvme10", 00:24:00.279 "trtype": "tcp", 00:24:00.279 "traddr": "10.0.0.2", 00:24:00.279 "adrfam": "ipv4", 00:24:00.279 "trsvcid": "4420", 00:24:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:00.279 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:00.279 "hdgst": false, 00:24:00.279 "ddgst": false 00:24:00.279 }, 00:24:00.279 "method": "bdev_nvme_attach_controller" 00:24:00.279 }' 00:24:00.279 [2024-04-26 15:04:45.940888] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:24:00.279 [2024-04-26 15:04:45.940956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842131 ] 00:24:00.279 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.279 [2024-04-26 15:04:45.975498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:00.550 [2024-04-26 15:04:46.004623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.550 [2024-04-26 15:04:46.090600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.448 Running I/O for 10 seconds... 
00:24:02.448 15:04:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:02.448 15:04:47 -- common/autotest_common.sh@850 -- # return 0 00:24:02.448 15:04:47 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:02.448 15:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.448 15:04:47 -- common/autotest_common.sh@10 -- # set +x 00:24:02.448 15:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.448 15:04:47 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.448 15:04:47 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:02.448 15:04:47 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:02.448 15:04:47 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:02.448 15:04:47 -- target/shutdown.sh@57 -- # local ret=1 00:24:02.448 15:04:47 -- target/shutdown.sh@58 -- # local i 00:24:02.448 15:04:47 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:02.448 15:04:47 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:02.448 15:04:47 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:02.448 15:04:47 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:02.448 15:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.448 15:04:47 -- common/autotest_common.sh@10 -- # set +x 00:24:02.448 15:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.448 15:04:47 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:02.448 15:04:47 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:02.448 15:04:47 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:02.707 15:04:48 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:02.707 15:04:48 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:02.707 15:04:48 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:02.707 15:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.707 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:24:02.707 15:04:48 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:02.707 15:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.707 15:04:48 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:02.707 15:04:48 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:02.707 15:04:48 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:02.985 15:04:48 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:02.985 15:04:48 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:02.985 15:04:48 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:02.985 15:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.985 15:04:48 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:02.985 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:24:02.985 15:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.985 15:04:48 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:02.985 15:04:48 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:02.985 15:04:48 -- target/shutdown.sh@64 -- # ret=0 00:24:02.985 15:04:48 -- target/shutdown.sh@65 -- # break 00:24:02.985 15:04:48 -- target/shutdown.sh@69 -- # return 0 00:24:02.985 15:04:48 -- target/shutdown.sh@135 -- # killprocess 3841960 00:24:02.985 15:04:48 -- common/autotest_common.sh@936 -- # '[' -z 3841960 ']' 00:24:02.985 15:04:48 -- common/autotest_common.sh@940 -- # kill 
-0 3841960 00:24:02.986 15:04:48 -- common/autotest_common.sh@941 -- # uname 00:24:02.986 15:04:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:02.986 15:04:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3841960 00:24:02.986 15:04:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:02.986 15:04:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:02.986 15:04:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3841960' 00:24:02.986 killing process with pid 3841960 00:24:02.986 15:04:48 -- common/autotest_common.sh@955 -- # kill 3841960 00:24:02.986 15:04:48 -- common/autotest_common.sh@960 -- # wait 3841960 00:24:02.986
[2024-04-26 15:04:48.627132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8510 is same with the state(5) to be set
[... identical nvmf_tcp_qpair_set_recv_state errors for tqpair=0x1ae8510 repeated through 15:04:48.627959; duplicate lines omitted ...]
[2024-04-26 15:04:48.629308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeae20 is same with the state(5) to be set
[... identical errors for tqpair=0x1aeae20 repeated through 15:04:48.630168; duplicate lines omitted ...]
[2024-04-26 15:04:48.631443] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae89a0 is same with the state(5) to be set
[... identical errors for tqpair=0x1ae89a0 repeated through 15:04:48.632258; duplicate lines omitted ...]
[2024-04-26 15:04:48.633870] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set
[... identical errors for tqpair=0x1ae8e30 repeated; duplicate lines omitted ...]
15:04:48.634027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634177] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634203] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634228] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634263] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634276] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same 
with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634427] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634439] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634451] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e30 is same with the state(5) to be set 00:24:02.988 [2024-04-26 15:04:48.634599] 
00:24:02.989 [2024-04-26 15:04:48.636398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:02.989 [2024-04-26 15:04:48.636441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.989 (the same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3)
00:24:02.989 [2024-04-26 15:04:48.636560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129ae00 is same with the state(5) to be set
00:24:02.989 (the same four aborted ASYNC EVENT REQUESTs followed by a recv-state error repeated for tqpair=0x10dce80, tqpair=0x110a290 and tqpair=0xccbc00, 15:04:48.636639 through 15:04:48.637115)
00:24:02.989 [2024-04-26 15:04:48.637568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae92c0 is same with the state(5) to be set
00:24:02.989 (last message repeated 62 times, 15:04:48.637601 through 15:04:48.638418)
00:24:02.990 [2024-04-26 15:04:48.639205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.990 [2024-04-26 15:04:48.639230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.990 (the same WRITE/ABORTED - SQ DELETION pair repeated for cid:61 lba:32384, cid:62 lba:32512 and cid:63 lba:32640)
00:24:02.990 [2024-04-26 15:04:48.639349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.990 [2024-04-26 15:04:48.639363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.990 [2024-04-26 15:04:48.639424] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9750 is same with the state(5) to be set
00:24:02.990 (last message repeated 62 times, 15:04:48.639451 through 15:04:48.640311, interleaved with the I/O abort notices that follow)
00:24:02.990 (READ commands sqid:1 cid:1 through cid:54, nsid:1, lba:24704 through lba:31488 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each reported and then ABORTED - SQ DELETION (00/08), 15:04:48.639378 through 15:04:48.641057)
00:24:02.992 [2024-04-26 15:04:48.641072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.992 [2024-04-26 15:04:48.641086] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.992 [2024-04-26 15:04:48.641100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.992 [2024-04-26 15:04:48.641114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.992 [2024-04-26 15:04:48.641129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.992 [2024-04-26 15:04:48.641142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.992 [2024-04-26 15:04:48.641157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.992 [2024-04-26 15:04:48.641171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.992 [2024-04-26 15:04:48.641186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.992 [2024-04-26 15:04:48.641199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.992 [2024-04-26 15:04:48.641321] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x122bfb0 was disconnected and freed. reset controller. 00:24:02.992 [2024-04-26 15:04:48.642218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642344] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642382] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642433] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642496] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642508] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642572] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642598] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642623] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the 
state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.992 [2024-04-26 15:04:48.642693] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642706] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642718] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642769] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642794] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642832] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642857] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642869] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642933] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.642998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.643011] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.643034] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.643047] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9be0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.645672] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.993 [2024-04-26 15:04:48.645721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccbc00 (9): Bad file descriptor 00:24:02.993 [2024-04-26 15:04:48.646759] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:02.993 [2024-04-26 15:04:48.646859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129ae00 (9): Bad file descriptor 00:24:02.993 [2024-04-26 15:04:48.646925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.646947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.646963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.646977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.646990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ad020 is same with the state(5) to be set 00:24:02.993 [2024-04-26 
15:04:48.647100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d5a0 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.647244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dce80 (9): Bad file descriptor 00:24:02.993 [2024-04-26 15:04:48.647281] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110a290 (9): Bad file descriptor 00:24:02.993 [2024-04-26 15:04:48.647321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea070 is same with the state(5) to be set 00:24:02.993 [2024-04-26 15:04:48.647333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.993 [2024-04-26 15:04:48.647439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993 [2024-04-26 15:04:48.647451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1102fb0 is same with 
the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.647806] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:02.993
[2024-04-26 15:04:48.647893] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:02.993
[2024-04-26 15:04:48.648067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648111] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648124] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.993
[2024-04-26 15:04:48.648265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993
[2024-04-26 15:04:48.648294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.993
[2024-04-26 15:04:48.648321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993
[2024-04-26 15:04:48.648334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.993
[2024-04-26 15:04:48.648346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.993
[2024-04-26 15:04:48.648358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.993
[2024-04-26 15:04:48.648374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.993
[2024-04-26 15:04:48.648384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648421] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648504] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648545] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648573] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648598] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648623] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648637] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648654] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648705] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648718] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648746] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648759] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648771] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648853] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648866] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994
[2024-04-26 15:04:48.648893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994
[2024-04-26 15:04:48.648907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea500 is same with the state(5) to be set 00:24:02.994
[2024-04-26 15:04:48.648911] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994 [2024-04-26 15:04:48.648926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.994 [2024-04-26 15:04:48.648942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.994 [2024-04-26 15:04:48.648956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.648971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.648985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995 [2024-04-26 15:04:48.649505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995 [2024-04-26 15:04:48.649525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649608] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649637] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649665] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649681] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649693] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649706] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649719] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649746] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649799] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649813] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649839] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.995
[2024-04-26 15:04:48.649922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.995
[2024-04-26 15:04:48.649935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.995
[2024-04-26 15:04:48.649944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996
[2024-04-26 15:04:48.649948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.649958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996
[2024-04-26 15:04:48.649961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.649977] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.649979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996
[2024-04-26 15:04:48.649990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.649994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996
[2024-04-26 15:04:48.650004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996
[2024-04-26 15:04:48.650024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996
[2024-04-26 15:04:48.650039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996
[2024-04-26 15:04:48.650053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996
[2024-04-26 15:04:48.650066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996
[2024-04-26 15:04:48.650095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996
[2024-04-26 15:04:48.650110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996
[2024-04-26 15:04:48.650123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996
[2024-04-26 15:04:48.650136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996
[2024-04-26 15:04:48.650149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996
[2024-04-26 15:04:48.650162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650179] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996
[2024-04-26 15:04:48.650192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26 15:04:48.650198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996
[2024-04-26 15:04:48.650205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996
[2024-04-26
15:04:48.650214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996 [2024-04-26 15:04:48.650218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996 [2024-04-26 15:04:48.650231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996 [2024-04-26 15:04:48.650257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996 [2024-04-26 15:04:48.650271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.996 [2024-04-26 15:04:48.650284] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.996 [2024-04-26 15:04:48.650296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650359] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650378] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10d7870 was disconnected and freed. reset controller. 
00:24:02.996 [2024-04-26 15:04:48.650392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650406] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650454] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea990 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.650872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.996 [2024-04-26 15:04:48.651067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.996 [2024-04-26 15:04:48.651094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xccbc00 with addr=10.0.0.2, port=4420 00:24:02.996 [2024-04-26 15:04:48.651110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccbc00 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.651228] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:02.996 [2024-04-26 15:04:48.651314] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:02.996 [2024-04-26 15:04:48.652649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:02.996 [2024-04-26 15:04:48.652712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247780 (9): Bad file descriptor 00:24:02.996 [2024-04-26 15:04:48.652740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccbc00 (9): Bad file descriptor 00:24:02.996 [2024-04-26 15:04:48.652864] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:02.996 [2024-04-26 15:04:48.653070] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.996 [2024-04-26 15:04:48.653092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.996 [2024-04-26 15:04:48.653109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.996 [2024-04-26 15:04:48.653560] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:02.996 [2024-04-26 15:04:48.653843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.996 [2024-04-26 15:04:48.654024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.996 [2024-04-26 15:04:48.654049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1247780 with addr=10.0.0.2, port=4420 00:24:02.996 [2024-04-26 15:04:48.654064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1247780 is same with the state(5) to be set 00:24:02.996 [2024-04-26 15:04:48.654160] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:02.996 [2024-04-26 15:04:48.654275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247780 (9): Bad file descriptor 00:24:02.996 [2024-04-26 15:04:48.654403] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:02.996 [2024-04-26 15:04:48.654437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:02.996 [2024-04-26 15:04:48.654455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:02.996 [2024-04-26 15:04:48.654478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:02.996 [2024-04-26 15:04:48.654552] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:02.996 [2024-04-26 15:04:48.656863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.996 [2024-04-26 15:04:48.656890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.656906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.997 [2024-04-26 15:04:48.656920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.656934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.997 [2024-04-26 15:04:48.656947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.656962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.997 [2024-04-26 15:04:48.656975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.656988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12470c0 is same with the state(5) to be set 00:24:02.997 [2024-04-26 15:04:48.657048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.997 [2024-04-26 15:04:48.657070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.997 [2024-04-26 15:04:48.657099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.997 [2024-04-26 15:04:48.657126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.997 [2024-04-26 15:04:48.657153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119cd60 is same with the state(5) to be set 00:24:02.997 [2024-04-26 15:04:48.657201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ad020 (9): Bad file descriptor 00:24:02.997 [2024-04-26 15:04:48.657233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d5a0 (9): Bad file descriptor 00:24:02.997 [2024-04-26 15:04:48.657274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1102fb0 (9): Bad file descriptor 00:24:02.997 [2024-04-26 15:04:48.657430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 
15:04:48.657618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657919] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.657978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.657994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.997 [2024-04-26 15:04:48.658452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.997 [2024-04-26 15:04:48.658466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.658971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.658987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.659371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.659386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122d340 is same with the state(5) to be set 00:24:02.998 [2024-04-26 15:04:48.660664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.660687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.660708] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.998 [2024-04-26 15:04:48.660723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.998 [2024-04-26 15:04:48.660739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.660754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.660770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.660785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.660801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.660815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.660831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.660845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.660862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.660876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.660892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.660906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.660922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.660936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.660952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.660966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.660982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.660996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:02.999 [2024-04-26 15:04:48.661952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.661982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.999 [2024-04-26 15:04:48.661997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.999 [2024-04-26 15:04:48.662011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.000 [2024-04-26 15:04:48.662038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.000 [2024-04-26 15:04:48.662054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.000 [2024-04-26 15:04:48.662070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.000 [2024-04-26 15:04:48.662084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.000 [2024-04-26 15:04:48.662099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.000 [2024-04-26 15:04:48.662113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.000 [2024-04-26 15:04:48.662128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.000 [2024-04-26 15:04:48.662142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.000 [2024-04-26 15:04:48.662157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.000 [2024-04-26 15:04:48.662171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.000 [2024-04-26 15:04:48.662186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.000 [2024-04-26 15:04:48.662200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.000 [2024-04-26 15:04:48.662219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.000 [2024-04-26 15:04:48.662234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.000 [2024-04-26 15:04:48.662249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.000 [2024-04-26 
15:04:48.662263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.000 [... READ/completion pairs repeat for cid:47-58 (lba:30592-32000, step 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all ABORTED - SQ DELETION (00/08), timestamps 15:04:48.662278-15:04:48.662617 ...]
00:24:03.000 [2024-04-26 15:04:48.662631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e7b0 is same with the state(5) to be set
00:24:03.000 [2024-04-26 15:04:48.663929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.000 [2024-04-26 15:04:48.663953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs repeat for cid:1-63 (lba:16512-24448, step 128), all ABORTED - SQ DELETION (00/08), timestamps 15:04:48.663974-15:04:48.665902, elapsed 00:24:03.000-00:24:03.002 ...]
00:24:03.002 [2024-04-26 15:04:48.665916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223950 is same with the state(5) to be set
00:24:03.002 [2024-04-26 15:04:48.667661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:03.002 [2024-04-26 15:04:48.667696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:03.002 [2024-04-26 15:04:48.667714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:24:03.002 [2024-04-26 15:04:48.667730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:03.002 [2024-04-26 15:04:48.667862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12470c0 (9): Bad file descriptor
00:24:03.002 [2024-04-26 15:04:48.667905] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119cd60 (9): Bad file descriptor
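Note: in the spdk_nvme_print_completion lines above, "(00/08)" is the NVMe status code type / status code pair: SCT 0x0 (Generic Command Status) with SC 0x08 (Command Aborted due to SQ Deletion). Every READ still queued when the qpair is torn down for the controller reset therefore completes as ABORTED, and dnr:0 means the host may retry. A minimal sketch of catching this status in an SPDK I/O completion callback follows; the callback name and message text are illustrative, not taken from this test:

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative completion callback: spots commands aborted because
 * their submission queue was deleted, matching the
 * "ABORTED - SQ DELETION (00/08)" lines in this log. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* sct=0x0, sc=0x08; dnr:0 in the log means the command
		 * may be resubmitted once the reset completes. */
		printf("I/O aborted by SQ deletion; retry after reset\n");
		return;
	}
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("I/O failed: sct=0x%x sc=0x%x\n",
		       cpl->status.sct, cpl->status.sc);
	}
}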
00:24:03.002 [2024-04-26 15:04:48.668397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:03.002 [2024-04-26 15:04:48.668616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:03.002 [2024-04-26 15:04:48.668641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xccbc00 with addr=10.0.0.2, port=4420
00:24:03.002 [2024-04-26 15:04:48.668657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccbc00 is same with the state(5) to be set
00:24:03.002 [2024-04-26 15:04:48.668790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:03.002 [2024-04-26 15:04:48.669003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:03.002 [2024-04-26 15:04:48.669035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10dce80 with addr=10.0.0.2, port=4420
00:24:03.002 [2024-04-26 15:04:48.669053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dce80 is same with the state(5) to be set
00:24:03.002 [2024-04-26 15:04:48.669243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:03.002 [2024-04-26 15:04:48.669434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:03.002 [2024-04-26 15:04:48.669457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110a290 with addr=10.0.0.2, port=4420
00:24:03.002 [2024-04-26 15:04:48.669472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110a290 is same with the state(5) to be set
00:24:03.002 [2024-04-26 15:04:48.669692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:03.002 [2024-04-26 15:04:48.669879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:03.002 [2024-04-26 15:04:48.669904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129ae00 with addr=10.0.0.2, port=4420
00:24:03.002 [2024-04-26 15:04:48.669920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129ae00 is same with the state(5) to be set
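Note: errno = 111 in the posix_sock_create lines is ECONNREFUSED on Linux: the controllers are mid-reset, so nothing is accepting on 10.0.0.2:4420 at the instant the host reconnects. The "(9)" in the earlier flush failures is EBADF, a flush attempted on a socket already closed by the disconnect. A standalone check of both codes (illustrative, not part of the test):

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Decode the raw errno values appearing in this log:
 * 111 -> ECONNREFUSED (connect() to 10.0.0.2:4420 refused),
 *   9 -> EBADF (flushing a qpair whose socket is gone). */
int main(void)
{
	const int codes[] = { ECONNREFUSED, EBADF };
	for (size_t i = 0; i < sizeof(codes) / sizeof(codes[0]); i++) {
		printf("errno %d: %s\n", codes[i], strerror(codes[i]));
	}
	return 0;
}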
00:24:03.002 [2024-04-26 15:04:48.670540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.002 [2024-04-26 15:04:48.670565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs repeat for cid:1-63 (lba:16512-24448, step 128), all ABORTED - SQ DELETION (00/08), timestamps 15:04:48.670591-15:04:48.672492, elapsed 00:24:03.002-00:24:03.003 ...]
00:24:03.003 [2024-04-26 15:04:48.672506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118c270 is same with the state(5) to be set
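Note: each of these abort dumps is one full queue of 64 outstanding commands: cid runs 0-63 and the LBA steps by the transfer length (len:128 blocks) from 16384 up to 16384 + 63 * 128 = 24448, exactly the last entry printed. A trivial standalone check of that arithmetic (illustrative only):

#include <stdio.h>

/* The aborted reads form a 64-deep queue of sequential 128-block
 * commands: lba = 16384 + cid * 128, so cid 63 lands on lba 24448
 * as the log shows, covering 8192 blocks in total. */
int main(void)
{
	const unsigned base_lba = 16384, len = 128, qdepth = 64;
	printf("first lba %u, last lba %u, blocks covered %u\n",
	       base_lba, base_lba + (qdepth - 1) * len, qdepth * len);
	return 0;
}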
00:24:03.003 [2024-04-26 15:04:48.673780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.003 [2024-04-26 15:04:48.673803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs repeat for cid:1-59 (lba:16512-23936, step 128), all ABORTED - SQ DELETION (00/08), timestamps 15:04:48.673824-15:04:48.675616, elapsed 00:24:03.003-00:24:03.005 ...]
00:24:03.005 [2024-04-26
15:04:48.675632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.675646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.675666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.675682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.675698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.675713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.675729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.675744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.675759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d6e0 is same with the state(5) to be set 00:24:03.005 [2024-04-26 15:04:48.676997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.005 [2024-04-26 15:04:48.677632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.005 [2024-04-26 15:04:48.677646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.677978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.677994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.006 [2024-04-26 15:04:48.678935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.006 [2024-04-26 15:04:48.678949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d63c0 is same with the state(5) to be set 00:24:03.006 [2024-04-26 15:04:48.680486] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:03.007 [2024-04-26 15:04:48.680518] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:03.007 [2024-04-26 15:04:48.680538] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:03.007 
[2024-04-26 15:04:48.680556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:03.007 [2024-04-26 15:04:48.680626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccbc00 (9): Bad file descriptor 00:24:03.007 [2024-04-26 15:04:48.680650] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dce80 (9): Bad file descriptor 00:24:03.007 [2024-04-26 15:04:48.680669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110a290 (9): Bad file descriptor 00:24:03.007 [2024-04-26 15:04:48.680687] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129ae00 (9): Bad file descriptor 00:24:03.007 [2024-04-26 15:04:48.680780] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.007 [2024-04-26 15:04:48.680805] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.007 [2024-04-26 15:04:48.680826] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.007 [2024-04-26 15:04:48.680845] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.007 [2024-04-26 15:04:48.681240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.007 [2024-04-26 15:04:48.681500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.007 [2024-04-26 15:04:48.681526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1247780 with addr=10.0.0.2, port=4420 00:24:03.007 [2024-04-26 15:04:48.681542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1247780 is same with the state(5) to be set 00:24:03.007 [2024-04-26 15:04:48.681737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.007 [2024-04-26 15:04:48.681987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.007 [2024-04-26 15:04:48.682012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1102fb0 with addr=10.0.0.2, port=4420 00:24:03.007 [2024-04-26 15:04:48.682047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1102fb0 is same with the state(5) to be set 00:24:03.007 [2024-04-26 15:04:48.682208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.007 [2024-04-26 15:04:48.682407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.007 [2024-04-26 15:04:48.682431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d5a0 with addr=10.0.0.2, port=4420 00:24:03.007 [2024-04-26 15:04:48.682447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d5a0 is same with the state(5) to be set 00:24:03.007 [2024-04-26 15:04:48.682631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.007 [2024-04-26 15:04:48.682877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.007 [2024-04-26 15:04:48.682903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ad020 with addr=10.0.0.2, port=4420 00:24:03.007 [2024-04-26 15:04:48.682919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x11ad020 is same with the state(5) to be set 00:24:03.007 [2024-04-26 15:04:48.682934] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:03.007 [2024-04-26 15:04:48.682947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:03.007 [2024-04-26 15:04:48.682962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:03.007 [2024-04-26 15:04:48.682984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:03.007 [2024-04-26 15:04:48.682997] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:03.007 [2024-04-26 15:04:48.683010] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:03.007 [2024-04-26 15:04:48.683037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:03.007 [2024-04-26 15:04:48.683052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:03.007 [2024-04-26 15:04:48.683065] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:03.007 [2024-04-26 15:04:48.683081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:03.007 [2024-04-26 15:04:48.683095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:03.007 [2024-04-26 15:04:48.683107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
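The runs of ABORTED - SQ DELETION (00/08) completions above are the expected signature of this reset test: when a controller is disconnected, its I/O submission queues are deleted and every in-flight READ/WRITE completes with status code type 0x0 (generic) and status code 0x08 (Command Aborted due to SQ Deletion); the (00/08) pair printed in each completion encodes exactly that SCT/SC combination. A minimal sketch of how a completion callback could recognize this status via the public SPDK API (the helper name abort_is_retryable is hypothetical; the types and constants come from spdk/nvme.h):

```c
#include <stdbool.h>
#include "spdk/nvme.h"

/* Hypothetical helper: returns true when a command failed only because its
 * submission queue was deleted (e.g. during a controller reset), in which
 * case the I/O did not touch the media and can be retried after reconnect. */
static bool
abort_is_retryable(const struct spdk_nvme_cpl *cpl)
{
	return spdk_nvme_cpl_is_error(cpl) &&
	       cpl->status.sct == SPDK_NVME_SCT_GENERIC &&          /* the "00" */
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;  /* the "08" */
}
```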
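The tail of the block shows why the reconnects then fail: each "resetting controller" notice tears down the TCP qpair (hence the flush failures on the closed socket), and the follow-up connect() to 10.0.0.2:4420 is refused with errno = 111 (ECONNREFUSED) while the target side is still down, so spdk_nvme_ctrlr_reconnect_poll_async reports "controller reinitialization failed" and the controller is marked failed. A sketch of that disconnect/reconnect flow, assuming only the public API functions named in the log and simplifying to a tight poll loop (real applications poll from their reactor):

```c
#include <errno.h>
#include "spdk/nvme.h"

/* Sketch of the reset path exercised above: disconnect aborts all queued
 * commands (producing the SQ DELETION completions), then the reconnect is
 * polled until it either succeeds or fails for good. */
static int
reset_and_reconnect(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc = spdk_nvme_ctrlr_disconnect(ctrlr);
	if (rc != 0) {
		return rc;
	}

	spdk_nvme_ctrlr_reconnect_async(ctrlr);

	/* While the target is unreachable, the transport's connect() fails
	 * with ECONNREFUSED (errno 111); the poll returns -EAGAIN while the
	 * attempt is still in progress and a final error once it gives up. */
	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);

	return rc;
}
```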
00:24:03.007 [2024-04-26 15:04:48.683965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.007 [2024-04-26 15:04:48.683990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.007 [2024-04-26 15:04:48.684013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.007 [2024-04-26 15:04:48.684037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.007 [2024-04-26 15:04:48.684055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.007 [2024-04-26 15:04:48.684069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.007 [2024-04-26 15:04:48.684090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.007 [2024-04-26 15:04:48.684105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1-60, lba:24704-32256 (2024-04-26 15:04:48.684121-685899) ...]
00:24:03.009 [2024-04-26 15:04:48.685913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d8d20 is same with the state(5) to be set
00:24:03.009 [2024-04-26 15:04:48.687191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.009 [2024-04-26 15:04:48.687214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1-4, lba:16512-16896 (2024-04-26 15:04:48.687235-687348) ...]
00:24:03.009 [2024-04-26 15:04:48.687363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.009 [2024-04-26 15:04:48.687377] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.687980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.687996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.009 [2024-04-26 15:04:48.688296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:03.009 [2024-04-26 15:04:48.688312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:03.010 [2024-04-26 15:04:48.688616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 
15:04:48.688922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.688983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.688998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.689013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.689035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.689052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.689067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.689083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.689097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.689114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.689129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.689145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.010 [2024-04-26 15:04:48.689160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.010 [2024-04-26 15:04:48.689177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10da100 is same with the state(5) to be set 00:24:03.010 [2024-04-26 15:04:48.690823] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.010 [2024-04-26 15:04:48.690849] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.010 [2024-04-26 15:04:48.690862] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.010 [2024-04-26 15:04:48.690875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
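Every READ in the dump above completes with ABORTED - SQ DELETION (00/08). The pair printed by spdk_nvme_print_completion is (status code type / status code): SCT 0x0 is the generic command set and SC 0x08 is "Command Aborted due to SQ Deletion", the expected completion for I/O still in flight when a submission queue is torn down during a controller reset. A minimal decode sketch (the helper is hypothetical, not part of the test scripts):

# Hypothetical helper: decode the "(SCT/SC)" pair printed above, e.g. "(00/08)".
decode_nvme_status() {
  local sct=$1 sc=$2
  case "${sct}/${sc}" in
    00/00) echo "GENERIC: SUCCESSFUL COMPLETION" ;;
    00/08) echo "GENERIC: COMMAND ABORTED DUE TO SQ DELETION" ;;
    00/*)  echo "GENERIC: sc=0x${sc}" ;;
    01/*)  echo "COMMAND SPECIFIC: sc=0x${sc}" ;;
    02/*)  echo "MEDIA/DATA INTEGRITY: sc=0x${sc}" ;;
    *)     echo "sct=0x${sct} sc=0x${sc}" ;;
  esac
}
decode_nvme_status 00 08   # -> GENERIC: COMMAND ABORTED DUE TO SQ DELETION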
00:24:03.010 [2024-04-26 15:04:48.690891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:03.270 task offset: 32256 on job bdev=Nvme1n1 fails
00:24:03.270
00:24:03.270 Latency(us)
00:24:03.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:03.270 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme1n1 ended in about 0.92 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme1n1 : 0.92 208.21 13.01 69.40 0.00 227959.28 5849.69 267192.70
00:24:03.270 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme2n1 ended in about 0.94 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme2n1 : 0.94 136.37 8.52 68.19 0.00 303394.01 21262.79 268746.15
00:24:03.270 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme3n1 ended in about 0.94 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme3n1 : 0.94 203.86 12.74 67.95 0.00 223720.87 20388.98 250104.79
00:24:03.270 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme4n1 ended in about 0.95 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme4n1 : 0.95 134.50 8.41 67.25 0.00 295577.98 36505.98 270299.59
00:24:03.270 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme5n1 ended in about 0.95 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme5n1 : 0.95 134.04 8.38 67.02 0.00 290457.22 21456.97 270299.59
00:24:03.270 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme6n1 ended in about 0.96 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme6n1 : 0.96 133.60 8.35 66.80 0.00 285471.98 22719.15 298261.62
00:24:03.270 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme7n1 ended in about 0.93 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme7n1 : 0.93 206.32 12.90 68.77 0.00 202592.38 5121.52 260978.92
00:24:03.270 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme8n1 ended in about 0.97 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme8n1 : 0.97 198.95 12.43 66.32 0.00 206766.08 17476.27 262532.36
00:24:03.270 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme9n1 ended in about 0.97 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme9n1 : 0.97 132.19 8.26 66.10 0.00 270926.13 22136.60 274959.93
00:24:03.270 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:03.270 Job: Nvme10n1 ended in about 0.95 seconds with error
00:24:03.270 Verification LBA range: start 0x0 length 0x400
00:24:03.270 Nvme10n1 : 0.95 135.43 8.46 67.72 0.00 257292.01 20680.25 267192.70
00:24:03.270 ===================================================================================================================
00:24:03.270 Total : 1623.48 101.47 675.51 0.00 251573.89 5121.52 298261.62
00:24:03.270 [2024-04-26 15:04:48.716384] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:24:03.270 [2024-04-26 15:04:48.716467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:03.270 [2024-04-26 15:04:48.716557] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247780 (9): Bad file descriptor 00:24:03.270 [2024-04-26 15:04:48.716586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1102fb0 (9): Bad file descriptor 00:24:03.270 [2024-04-26 15:04:48.716605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d5a0 (9): Bad file descriptor 00:24:03.270 [2024-04-26 15:04:48.716623] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ad020 (9): Bad file descriptor 00:24:03.270 [2024-04-26 15:04:48.716719] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.270 [2024-04-26 15:04:48.716745] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.270 [2024-04-26 15:04:48.716763] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.270 [2024-04-26 15:04:48.716783] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.270 [2024-04-26 15:04:48.717219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.270 [2024-04-26 15:04:48.717398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.270 [2024-04-26 15:04:48.717425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119cd60 with addr=10.0.0.2, port=4420 00:24:03.270 [2024-04-26 15:04:48.717445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119cd60 is same with the state(5) to be set 00:24:03.270 [2024-04-26 15:04:48.717580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.270 [2024-04-26 15:04:48.717745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.270 [2024-04-26 15:04:48.717770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12470c0 with addr=10.0.0.2, port=4420 00:24:03.270 [2024-04-26 15:04:48.717785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12470c0 is same with the state(5) to be set 00:24:03.270 [2024-04-26 15:04:48.717800] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:03.270 [2024-04-26 15:04:48.717814] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:03.270 [2024-04-26 15:04:48.717830] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:03.270 [2024-04-26 15:04:48.717850] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:03.270 [2024-04-26 15:04:48.717864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:03.270 [2024-04-26 15:04:48.717877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
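The repeated connect() failed, errno = 111 lines are ECONNREFUSED: the reset path tries to re-open each TCP queue pair to 10.0.0.2:4420, but nothing is listening any more, so every reconnect is refused and nvme_ctrlr_fail moves the controllers into the failed state. A quick manual check for the same condition (a sketch relying on bash's /dev/tcp redirection):

# errno 111 == ECONNREFUSED; probe whether anything still accepts on the port.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
  echo "10.0.0.2:4420 is accepting connections"
else
  echo "10.0.0.2:4420 refused/unreachable (expected once the target is down)"
fi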
00:24:03.270 [2024-04-26 15:04:48.717893] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:03.271 [2024-04-26 15:04:48.717907] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:03.271 [2024-04-26 15:04:48.717920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:03.271 [2024-04-26 15:04:48.717936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:03.271 [2024-04-26 15:04:48.717949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:03.271 [2024-04-26 15:04:48.717962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:03.271 [2024-04-26 15:04:48.718006] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.271 [2024-04-26 15:04:48.718040] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.271 [2024-04-26 15:04:48.718059] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.271 [2024-04-26 15:04:48.718077] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.271 [2024-04-26 15:04:48.718094] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.271 [2024-04-26 15:04:48.718112] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.271 [2024-04-26 15:04:48.718129] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.271 [2024-04-26 15:04:48.718147] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.271 [2024-04-26 15:04:48.718747] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:03.271 [2024-04-26 15:04:48.718774] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:03.271 [2024-04-26 15:04:48.718790] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:03.271 [2024-04-26 15:04:48.718806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:03.271 [2024-04-26 15:04:48.718846] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.271 [2024-04-26 15:04:48.718863] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.271 [2024-04-26 15:04:48.718875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.271 [2024-04-26 15:04:48.718917] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119cd60 (9): Bad file descriptor 00:24:03.271 [2024-04-26 15:04:48.718940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12470c0 (9): Bad file descriptor 00:24:03.271 [2024-04-26 15:04:48.719268] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:03.271 [2024-04-26 15:04:48.719474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.271 [2024-04-26 15:04:48.719639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.271 [2024-04-26 15:04:48.719665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129ae00 with addr=10.0.0.2, port=4420 00:24:03.271 [2024-04-26 15:04:48.719681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129ae00 is same with the state(5) to be set 00:24:03.271 [2024-04-26 15:04:48.719841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.271 [2024-04-26 15:04:48.719975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.271 [2024-04-26 15:04:48.719999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110a290 with addr=10.0.0.2, port=4420 00:24:03.271 [2024-04-26 15:04:48.720015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110a290 is same with the state(5) to be set 00:24:03.271 [2024-04-26 15:04:48.720181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.271 [2024-04-26 15:04:48.720321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.271 [2024-04-26 15:04:48.720346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10dce80 with addr=10.0.0.2, port=4420 00:24:03.271 [2024-04-26 15:04:48.720362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dce80 is same with the state(5) to be set 00:24:03.271 [2024-04-26 15:04:48.720519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.271 [2024-04-26 15:04:48.720637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.271 [2024-04-26 15:04:48.720663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xccbc00 with addr=10.0.0.2, port=4420 00:24:03.271 [2024-04-26 15:04:48.720678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccbc00 is same with the state(5) to be set 00:24:03.271 [2024-04-26 15:04:48.720693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:03.271 [2024-04-26 15:04:48.720705] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:03.271 [2024-04-26 15:04:48.720718] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:03.271 [2024-04-26 15:04:48.720736] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:03.271 [2024-04-26 15:04:48.720750] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:03.271 [2024-04-26 15:04:48.720762] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:03.271 [2024-04-26 15:04:48.720818] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.271 [2024-04-26 15:04:48.720837] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:03.271 [2024-04-26 15:04:48.720854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129ae00 (9): Bad file descriptor 00:24:03.271 [2024-04-26 15:04:48.720873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110a290 (9): Bad file descriptor 00:24:03.271 [2024-04-26 15:04:48.720890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dce80 (9): Bad file descriptor 00:24:03.271 [2024-04-26 15:04:48.720907] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccbc00 (9): Bad file descriptor 00:24:03.271 [2024-04-26 15:04:48.720945] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:03.271 [2024-04-26 15:04:48.720962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:03.271 [2024-04-26 15:04:48.720975] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:03.271 [2024-04-26 15:04:48.720991] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:03.271 [2024-04-26 15:04:48.721005] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:03.271 [2024-04-26 15:04:48.721017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:03.271 [2024-04-26 15:04:48.721044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:03.271 [2024-04-26 15:04:48.721058] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:03.271 [2024-04-26 15:04:48.721070] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:03.271 [2024-04-26 15:04:48.721085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:03.271 [2024-04-26 15:04:48.721097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:03.271 [2024-04-26 15:04:48.721109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:03.271 [2024-04-26 15:04:48.721159] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.271 [2024-04-26 15:04:48.721178] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.271 [2024-04-26 15:04:48.721190] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.271 [2024-04-26 15:04:48.721202] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
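For context, the per-device rows in the latency table further up come from a bdevperf verify job against the ten NVMe-oF controllers. The flags below are inferred from the printed job header (queue depth 64, 64 KiB I/O, verify workload); the binary path, run time, and bdev configuration wiring are assumptions, not the exact invocation in shutdown.sh:

# Sketch of a bdevperf run matching "workload: verify, depth: 64, IO size: 65536".
# The NVMe bdevs themselves would be supplied via a JSON config; omitted here.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10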
00:24:03.530 15:04:49 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:03.530 15:04:49 -- target/shutdown.sh@139 -- # sleep 1 00:24:04.469 15:04:50 -- target/shutdown.sh@142 -- # kill -9 3842131 00:24:04.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3842131) - No such process 00:24:04.469 15:04:50 -- target/shutdown.sh@142 -- # true 00:24:04.469 15:04:50 -- target/shutdown.sh@144 -- # stoptarget 00:24:04.469 15:04:50 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:04.469 15:04:50 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:04.469 15:04:50 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:04.469 15:04:50 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:04.469 15:04:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:04.469 15:04:50 -- nvmf/common.sh@117 -- # sync 00:24:04.469 15:04:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:04.469 15:04:50 -- nvmf/common.sh@120 -- # set +e 00:24:04.469 15:04:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:04.469 15:04:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:04.469 rmmod nvme_tcp 00:24:04.469 rmmod nvme_fabrics 00:24:04.469 rmmod nvme_keyring 00:24:04.728 15:04:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:04.728 15:04:50 -- nvmf/common.sh@124 -- # set -e 00:24:04.728 15:04:50 -- nvmf/common.sh@125 -- # return 0 00:24:04.728 15:04:50 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:24:04.728 15:04:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:04.728 15:04:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:04.728 15:04:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:04.728 15:04:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.728 15:04:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:04.728 15:04:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.728 15:04:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.728 15:04:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.632 15:04:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:06.632 00:24:06.632 real 0m7.467s 00:24:06.632 user 0m18.334s 00:24:06.632 sys 0m1.493s 00:24:06.632 15:04:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:06.632 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:24:06.632 ************************************ 00:24:06.632 END TEST nvmf_shutdown_tc3 00:24:06.632 ************************************ 00:24:06.632 15:04:52 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:06.632 00:24:06.632 real 0m27.400s 00:24:06.632 user 1m16.164s 00:24:06.632 sys 0m6.555s 00:24:06.632 15:04:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:06.632 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:24:06.632 ************************************ 00:24:06.632 END TEST nvmf_shutdown 00:24:06.632 ************************************ 00:24:06.632 15:04:52 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:24:06.632 15:04:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:06.632 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:24:06.632 15:04:52 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:24:06.632 15:04:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:06.632 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:24:06.632 
15:04:52 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:24:06.632 15:04:52 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:06.632 15:04:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:06.632 15:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:06.632 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:24:06.890 ************************************ 00:24:06.890 START TEST nvmf_multicontroller 00:24:06.890 ************************************ 00:24:06.890 15:04:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:06.890 * Looking for test storage... 00:24:06.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.890 15:04:52 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.890 15:04:52 -- nvmf/common.sh@7 -- # uname -s 00:24:06.890 15:04:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.890 15:04:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.890 15:04:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.890 15:04:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.890 15:04:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.890 15:04:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.890 15:04:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.890 15:04:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.890 15:04:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.890 15:04:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.890 15:04:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:06.890 15:04:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:06.890 15:04:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.890 15:04:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.890 15:04:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.890 15:04:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.890 15:04:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.890 15:04:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.890 15:04:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.890 15:04:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.891 15:04:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.891 15:04:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.891 15:04:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.891 15:04:52 -- paths/export.sh@5 -- # export PATH 00:24:06.891 15:04:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.891 15:04:52 -- nvmf/common.sh@47 -- # : 0 00:24:06.891 15:04:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.891 15:04:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.891 15:04:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.891 15:04:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.891 15:04:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.891 15:04:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.891 15:04:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.891 15:04:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:06.891 15:04:52 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:06.891 15:04:52 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:06.891 15:04:52 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:06.891 15:04:52 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:06.891 15:04:52 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.891 15:04:52 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:06.891 15:04:52 -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:06.891 15:04:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:06.891 15:04:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.891 15:04:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:06.891 15:04:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:06.891 15:04:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:06.891 15:04:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.891 15:04:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.891 15:04:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
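_remove_spdk_ns clears any target network namespace left behind by a previous run before prepare_net_devs builds a fresh one. A rough equivalent (the *_ns_spdk suffix matches the cvl_0_0_ns_spdk namespace created later in this trace):

# Sketch: delete leftover SPDK target namespaces from an earlier test run.
for ns in $(ip netns list | awk '{print $1}'); do
  [[ "$ns" == *_ns_spdk ]] && ip netns delete "$ns"
done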
00:24:06.891 15:04:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:06.891 15:04:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:06.891 15:04:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:06.891 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:24:08.790 15:04:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:08.790 15:04:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.790 15:04:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.790 15:04:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.790 15:04:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.790 15:04:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.790 15:04:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.790 15:04:54 -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.790 15:04:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:08.790 15:04:54 -- nvmf/common.sh@296 -- # e810=() 00:24:08.790 15:04:54 -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.790 15:04:54 -- nvmf/common.sh@297 -- # x722=() 00:24:08.790 15:04:54 -- nvmf/common.sh@297 -- # local -ga x722 00:24:08.790 15:04:54 -- nvmf/common.sh@298 -- # mlx=() 00:24:08.790 15:04:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.790 15:04:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.790 15:04:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.790 15:04:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:08.790 15:04:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.790 15:04:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.790 15:04:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:08.790 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:08.790 15:04:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.790 15:04:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:08.790 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:08.790 15:04:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
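gather_supported_nvmf_pci_devs has matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, ice driver) and will pick up the cvl_0_0/cvl_0_1 netdevs behind them. The same discovery can be reproduced by hand (a sketch; the BDF values are the ones found in this run):

# Sketch: list the E810 ports and the kernel netdev bound to each one.
lspci -D -d 8086:159b
for pci in 0000:84:00.0 0000:84:00.1; do
  echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
done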
00:24:08.790 15:04:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.790 15:04:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.790 15:04:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.790 15:04:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:08.790 15:04:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.790 15:04:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:08.790 Found net devices under 0000:84:00.0: cvl_0_0 00:24:08.790 15:04:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.790 15:04:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.790 15:04:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.790 15:04:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:08.790 15:04:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.790 15:04:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:08.790 Found net devices under 0000:84:00.1: cvl_0_1 00:24:08.790 15:04:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.790 15:04:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:08.790 15:04:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:08.790 15:04:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:08.790 15:04:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:08.790 15:04:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.790 15:04:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.790 15:04:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.790 15:04:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:08.790 15:04:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.790 15:04:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.790 15:04:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:08.790 15:04:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.790 15:04:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.790 15:04:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:08.790 15:04:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:08.790 15:04:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.790 15:04:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.790 15:04:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.790 15:04:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.790 15:04:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.790 15:04:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.790 15:04:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.790 15:04:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:24:08.790 15:04:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:24:08.790 00:24:08.790 --- 10.0.0.2 ping statistics --- 00:24:08.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.790 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:08.790 15:04:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:08.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:08.791 00:24:08.791 --- 10.0.0.1 ping statistics --- 00:24:08.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.791 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:08.791 15:04:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.791 15:04:54 -- nvmf/common.sh@411 -- # return 0 00:24:08.791 15:04:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:08.791 15:04:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.791 15:04:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:08.791 15:04:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:08.791 15:04:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.791 15:04:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:08.791 15:04:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:09.050 15:04:54 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:09.050 15:04:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:09.050 15:04:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:09.050 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.050 15:04:54 -- nvmf/common.sh@470 -- # nvmfpid=3844566 00:24:09.050 15:04:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:09.050 15:04:54 -- nvmf/common.sh@471 -- # waitforlisten 3844566 00:24:09.050 15:04:54 -- common/autotest_common.sh@817 -- # '[' -z 3844566 ']' 00:24:09.050 15:04:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.050 15:04:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:09.050 15:04:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.050 15:04:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:09.050 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.050 [2024-04-26 15:04:54.585637] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:24:09.050 [2024-04-26 15:04:54.585708] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.050 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.050 [2024-04-26 15:04:54.622577] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
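The nvmf_tcp_init trace above reduces to a small reproducible topology: the two ports of one physical NIC are cabled back-to-back, one port is moved into a private network namespace to act as the target, and the other stays in the root namespace as the initiator, so NVMe/TCP traffic crosses real hardware. A condensed sketch of the equivalent commands, using the interface names and addresses taken directly from the trace:

  # Condensed from the nvmf_tcp_init trace; cvl_0_0/cvl_0_1 and the
  # 10.0.0.0/24 addresses are the values the harness actually used.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                          # reach target from initiator
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside the namespace, which is what the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") line above arranges.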
00:24:09.050 [2024-04-26 15:04:54.650548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:09.050 [2024-04-26 15:04:54.739039] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.050 [2024-04-26 15:04:54.739106] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.050 [2024-04-26 15:04:54.739136] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.050 [2024-04-26 15:04:54.739148] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.050 [2024-04-26 15:04:54.739159] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.050 [2024-04-26 15:04:54.739246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.050 [2024-04-26 15:04:54.739315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.050 [2024-04-26 15:04:54.739318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.309 15:04:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:09.309 15:04:54 -- common/autotest_common.sh@850 -- # return 0 00:24:09.309 15:04:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:09.309 15:04:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:09.309 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.309 15:04:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.309 15:04:54 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.309 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.309 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.309 [2024-04-26 15:04:54.882429] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.309 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.309 15:04:54 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:09.309 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.309 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.309 Malloc0 00:24:09.309 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.309 15:04:54 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.309 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.309 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.309 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.309 15:04:54 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:09.309 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.309 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.309 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.309 15:04:54 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.309 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.309 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.309 [2024-04-26 15:04:54.944126] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.309 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.309 
15:04:54 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:09.309 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.310 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.310 [2024-04-26 15:04:54.951957] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:09.310 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.310 15:04:54 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:09.310 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.310 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.310 Malloc1 00:24:09.310 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.310 15:04:54 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:09.310 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.310 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.310 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.310 15:04:54 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:09.310 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.310 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.310 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.310 15:04:54 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:09.310 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.310 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:24:09.310 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.310 15:04:55 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:09.310 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.310 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:09.310 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.310 15:04:55 -- host/multicontroller.sh@44 -- # bdevperf_pid=3844710 00:24:09.310 15:04:55 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.310 15:04:55 -- host/multicontroller.sh@47 -- # waitforlisten 3844710 /var/tmp/bdevperf.sock 00:24:09.310 15:04:55 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:09.310 15:04:55 -- common/autotest_common.sh@817 -- # '[' -z 3844710 ']' 00:24:09.310 15:04:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.310 15:04:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:09.310 15:04:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
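With the target listening, the script provisions it over JSON-RPC: one TCP transport, then for each of the two subsystems a 64 MiB malloc bdev with 512-byte blocks, a namespace, and listeners on ports 4420 and 4421. A hand-replayable sketch of the same sequence, assuming scripts/rpc.py from the SPDK tree (the rpc_cmd helper in the trace forwards to the same JSON-RPC methods; all flags are copied verbatim from it):

  # Provision the nvmf target the way multicontroller.sh does; flags
  # copied from the rpc_cmd calls in the trace above.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for n in 1 2; do
    rpc.py bdev_malloc_create 64 512 -b Malloc$((n-1))    # 64 MiB, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$n \
        -a -s SPDK0000000000000$n
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$n Malloc$((n-1))
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$n \
        -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$n \
        -t tcp -a 10.0.0.2 -s 4421
  done

Two identical subsystems on two ports give the bdevperf process now starting enough distinct paths to exercise every duplicate-attach case that follows.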
00:24:09.310 15:04:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:09.310 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:09.568 15:04:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:09.568 15:04:55 -- common/autotest_common.sh@850 -- # return 0 00:24:09.568 15:04:55 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:09.568 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.568 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:09.826 NVMe0n1 00:24:09.827 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.827 15:04:55 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:09.827 15:04:55 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:09.827 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.827 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:09.827 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.827 1 00:24:09.827 15:04:55 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:09.827 15:04:55 -- common/autotest_common.sh@638 -- # local es=0 00:24:09.827 15:04:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:09.827 15:04:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:09.827 15:04:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:09.827 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.827 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:09.827 request: 00:24:09.827 { 00:24:09.827 "name": "NVMe0", 00:24:09.827 "trtype": "tcp", 00:24:09.827 "traddr": "10.0.0.2", 00:24:09.827 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:09.827 "hostaddr": "10.0.0.2", 00:24:09.827 "hostsvcid": "60000", 00:24:09.827 "adrfam": "ipv4", 00:24:09.827 "trsvcid": "4420", 00:24:09.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.827 "method": "bdev_nvme_attach_controller", 00:24:09.827 "req_id": 1 00:24:09.827 } 00:24:09.827 Got JSON-RPC error response 00:24:09.827 response: 00:24:09.827 { 00:24:09.827 "code": -114, 00:24:09.827 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:09.827 } 00:24:09.827 15:04:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:09.827 15:04:55 -- common/autotest_common.sh@641 -- # es=1 00:24:09.827 15:04:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:09.827 15:04:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:09.827 15:04:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:09.827 15:04:55 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:09.827 15:04:55 -- common/autotest_common.sh@638 -- # local es=0 00:24:09.827 15:04:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:09.827 15:04:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:09.827 15:04:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:09.827 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.827 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:09.827 request: 00:24:09.827 { 00:24:09.827 "name": "NVMe0", 00:24:09.827 "trtype": "tcp", 00:24:09.827 "traddr": "10.0.0.2", 00:24:09.827 "hostaddr": "10.0.0.2", 00:24:09.827 "hostsvcid": "60000", 00:24:09.827 "adrfam": "ipv4", 00:24:09.827 "trsvcid": "4420", 00:24:09.827 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:09.827 "method": "bdev_nvme_attach_controller", 00:24:09.827 "req_id": 1 00:24:09.827 } 00:24:09.827 Got JSON-RPC error response 00:24:09.827 response: 00:24:09.827 { 00:24:09.827 "code": -114, 00:24:09.827 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:09.827 } 00:24:09.827 15:04:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:09.827 15:04:55 -- common/autotest_common.sh@641 -- # es=1 00:24:09.827 15:04:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:09.827 15:04:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:09.827 15:04:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:09.827 15:04:55 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:09.827 15:04:55 -- common/autotest_common.sh@638 -- # local es=0 00:24:09.827 15:04:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:09.827 15:04:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:09.827 15:04:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:09.827 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.827 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:09.827 request: 00:24:09.827 { 00:24:09.827 "name": "NVMe0", 00:24:09.827 "trtype": "tcp", 00:24:09.827 "traddr": "10.0.0.2", 00:24:09.827 "hostaddr": 
"10.0.0.2", 00:24:09.827 "hostsvcid": "60000", 00:24:09.827 "adrfam": "ipv4", 00:24:09.827 "trsvcid": "4420", 00:24:09.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.827 "multipath": "disable", 00:24:09.827 "method": "bdev_nvme_attach_controller", 00:24:09.827 "req_id": 1 00:24:09.827 } 00:24:09.827 Got JSON-RPC error response 00:24:09.827 response: 00:24:09.827 { 00:24:09.827 "code": -114, 00:24:09.827 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:09.827 } 00:24:09.827 15:04:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:09.827 15:04:55 -- common/autotest_common.sh@641 -- # es=1 00:24:09.827 15:04:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:09.827 15:04:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:09.827 15:04:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:09.827 15:04:55 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:09.827 15:04:55 -- common/autotest_common.sh@638 -- # local es=0 00:24:09.827 15:04:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:09.827 15:04:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:09.827 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:09.827 15:04:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:09.827 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.827 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:09.827 request: 00:24:09.827 { 00:24:09.827 "name": "NVMe0", 00:24:09.827 "trtype": "tcp", 00:24:09.827 "traddr": "10.0.0.2", 00:24:09.827 "hostaddr": "10.0.0.2", 00:24:09.827 "hostsvcid": "60000", 00:24:09.827 "adrfam": "ipv4", 00:24:09.827 "trsvcid": "4420", 00:24:09.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.827 "multipath": "failover", 00:24:09.827 "method": "bdev_nvme_attach_controller", 00:24:09.827 "req_id": 1 00:24:09.827 } 00:24:09.827 Got JSON-RPC error response 00:24:09.827 response: 00:24:09.827 { 00:24:09.827 "code": -114, 00:24:09.827 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:09.827 } 00:24:09.827 15:04:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:09.827 15:04:55 -- common/autotest_common.sh@641 -- # es=1 00:24:09.827 15:04:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:09.827 15:04:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:09.827 15:04:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:09.827 15:04:55 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:09.827 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.827 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:10.086 00:24:10.086 15:04:55 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:24:10.086 15:04:55 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:10.086 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.086 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:10.086 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.086 15:04:55 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:10.086 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.086 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:10.086 00:24:10.086 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.086 15:04:55 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:10.086 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.086 15:04:55 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:10.086 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:24:10.086 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.086 15:04:55 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:10.086 15:04:55 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:11.461 0 00:24:11.461 15:04:56 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:11.461 15:04:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.461 15:04:56 -- common/autotest_common.sh@10 -- # set +x 00:24:11.461 15:04:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.461 15:04:56 -- host/multicontroller.sh@100 -- # killprocess 3844710 00:24:11.461 15:04:56 -- common/autotest_common.sh@936 -- # '[' -z 3844710 ']' 00:24:11.461 15:04:56 -- common/autotest_common.sh@940 -- # kill -0 3844710 00:24:11.461 15:04:56 -- common/autotest_common.sh@941 -- # uname 00:24:11.461 15:04:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:11.461 15:04:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3844710 00:24:11.461 15:04:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:11.461 15:04:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:11.461 15:04:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3844710' 00:24:11.461 killing process with pid 3844710 00:24:11.461 15:04:56 -- common/autotest_common.sh@955 -- # kill 3844710 00:24:11.461 15:04:56 -- common/autotest_common.sh@960 -- # wait 3844710 00:24:11.461 15:04:57 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.461 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.461 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:24:11.461 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.461 15:04:57 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:11.461 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.461 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:24:11.461 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.461 15:04:57 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
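The four -114 rejections above pin down the multipath contract of bdev_nvme_attach_controller: once a controller named NVMe0 exists, re-attaching under the same name with a different hostnqn, to a different subsystem (cnode2), with -x disable, or to the identical 4420 path even with -x failover is refused. Only a genuinely new path to the same subsystem, the second listener on port 4421, is accepted; the test then detaches that path and attaches it as a separate controller, NVMe1, before running I/O. A sketch against bdevperf's RPC socket, with the rejected variants left as comments:

  # Multipath behaviour exercised above, replayed against bdevperf's socket.
  SOCK=/var/tmp/bdevperf.sock
  rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # rejected (-114): same name, different hostnqn (-q nqn.2021-09-7.io.spdk:00001)
  # rejected (-114): same name, different subsystem (cnode2)
  # rejected (-114): same name with -x disable (multipath explicitly off)
  # rejected (-114): same name, same 4420 path, even with -x failover
  rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # new path: accepted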
00:24:11.461 15:04:57 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:11.461 15:04:57 -- common/autotest_common.sh@1598 -- # read -r file 00:24:11.461 15:04:57 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:11.461 15:04:57 -- common/autotest_common.sh@1597 -- # sort -u 00:24:11.461 15:04:57 -- common/autotest_common.sh@1599 -- # cat 00:24:11.461 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:11.461 [2024-04-26 15:04:55.053200] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:24:11.461 [2024-04-26 15:04:55.053284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844710 ] 00:24:11.461 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.461 [2024-04-26 15:04:55.084889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:11.461 [2024-04-26 15:04:55.113470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.461 [2024-04-26 15:04:55.197154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.461 [2024-04-26 15:04:55.701125] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name d86764b8-cd54-45c3-bef5-61234e35225a already exists 00:24:11.461 [2024-04-26 15:04:55.701169] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:d86764b8-cd54-45c3-bef5-61234e35225a alias for bdev NVMe1n1 00:24:11.461 [2024-04-26 15:04:55.701205] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:11.461 Running I/O for 1 seconds... 
00:24:11.461 00:24:11.461 Latency(us) 00:24:11.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.461 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:11.461 NVMe0n1 : 1.00 19058.62 74.45 0.00 0.00 6706.03 2208.81 12379.02 00:24:11.461 =================================================================================================================== 00:24:11.461 Total : 19058.62 74.45 0.00 0.00 6706.03 2208.81 12379.02 00:24:11.461 Received shutdown signal, test time was about 1.000000 seconds 00:24:11.461 00:24:11.461 Latency(us) 00:24:11.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.461 =================================================================================================================== 00:24:11.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.461 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:11.461 15:04:57 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:11.461 15:04:57 -- common/autotest_common.sh@1598 -- # read -r file 00:24:11.461 15:04:57 -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:11.461 15:04:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:11.461 15:04:57 -- nvmf/common.sh@117 -- # sync 00:24:11.461 15:04:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:11.461 15:04:57 -- nvmf/common.sh@120 -- # set +e 00:24:11.461 15:04:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.461 15:04:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:11.461 rmmod nvme_tcp 00:24:11.461 rmmod nvme_fabrics 00:24:11.461 rmmod nvme_keyring 00:24:11.461 15:04:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.461 15:04:57 -- nvmf/common.sh@124 -- # set -e 00:24:11.461 15:04:57 -- nvmf/common.sh@125 -- # return 0 00:24:11.461 15:04:57 -- nvmf/common.sh@478 -- # '[' -n 3844566 ']' 00:24:11.461 15:04:57 -- nvmf/common.sh@479 -- # killprocess 3844566 00:24:11.461 15:04:57 -- common/autotest_common.sh@936 -- # '[' -z 3844566 ']' 00:24:11.461 15:04:57 -- common/autotest_common.sh@940 -- # kill -0 3844566 00:24:11.461 15:04:57 -- common/autotest_common.sh@941 -- # uname 00:24:11.461 15:04:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:11.461 15:04:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3844566 00:24:11.720 15:04:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:11.720 15:04:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:11.720 15:04:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3844566' 00:24:11.720 killing process with pid 3844566 00:24:11.720 15:04:57 -- common/autotest_common.sh@955 -- # kill 3844566 00:24:11.720 15:04:57 -- common/autotest_common.sh@960 -- # wait 3844566 00:24:11.979 15:04:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:11.979 15:04:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:11.979 15:04:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:11.979 15:04:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:11.979 15:04:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:11.979 15:04:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.979 15:04:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.979 15:04:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.881 15:04:59 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:13.881 00:24:13.881 real 0m7.103s 00:24:13.881 user 0m10.908s 00:24:13.881 sys 0m2.187s 00:24:13.881 15:04:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:13.881 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:24:13.881 ************************************ 00:24:13.881 END TEST nvmf_multicontroller 00:24:13.881 ************************************ 00:24:13.881 15:04:59 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:13.881 15:04:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:13.881 15:04:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:13.881 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:24:14.139 ************************************ 00:24:14.139 START TEST nvmf_aer 00:24:14.139 ************************************ 00:24:14.139 15:04:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:14.139 * Looking for test storage... 00:24:14.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.139 15:04:59 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.139 15:04:59 -- nvmf/common.sh@7 -- # uname -s 00:24:14.139 15:04:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.139 15:04:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.139 15:04:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.139 15:04:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.139 15:04:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.139 15:04:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.139 15:04:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.139 15:04:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.139 15:04:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.139 15:04:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.139 15:04:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:14.139 15:04:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:14.139 15:04:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.139 15:04:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.139 15:04:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.139 15:04:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.139 15:04:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.139 15:04:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.139 15:04:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.139 15:04:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.139 15:04:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.139 15:04:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.139 15:04:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.139 15:04:59 -- paths/export.sh@5 -- # export PATH 00:24:14.139 15:04:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.139 15:04:59 -- nvmf/common.sh@47 -- # : 0 00:24:14.139 15:04:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.139 15:04:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.139 15:04:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.139 15:04:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.139 15:04:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.139 15:04:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.139 15:04:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.139 15:04:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.139 15:04:59 -- host/aer.sh@11 -- # nvmftestinit 00:24:14.139 15:04:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:14.139 15:04:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.139 15:04:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:14.139 15:04:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:14.139 15:04:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:14.139 15:04:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.140 15:04:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.140 15:04:59 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.140 15:04:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:14.140 15:04:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:14.140 15:04:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.140 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:24:16.041 15:05:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:16.041 15:05:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.041 15:05:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.041 15:05:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.041 15:05:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.041 15:05:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.041 15:05:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.041 15:05:01 -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.041 15:05:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.041 15:05:01 -- nvmf/common.sh@296 -- # e810=() 00:24:16.041 15:05:01 -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.041 15:05:01 -- nvmf/common.sh@297 -- # x722=() 00:24:16.041 15:05:01 -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.041 15:05:01 -- nvmf/common.sh@298 -- # mlx=() 00:24:16.041 15:05:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.041 15:05:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.041 15:05:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.041 15:05:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:16.041 15:05:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.041 15:05:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.041 15:05:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:16.041 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:16.041 15:05:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.041 15:05:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:16.041 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:16.041 
15:05:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.041 15:05:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.041 15:05:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.041 15:05:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:16.041 15:05:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.041 15:05:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:16.041 Found net devices under 0000:84:00.0: cvl_0_0 00:24:16.041 15:05:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.041 15:05:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.041 15:05:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.041 15:05:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:16.041 15:05:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.041 15:05:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:16.041 Found net devices under 0000:84:00.1: cvl_0_1 00:24:16.041 15:05:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.041 15:05:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:16.041 15:05:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:16.041 15:05:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:16.041 15:05:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:16.041 15:05:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.041 15:05:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.041 15:05:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.041 15:05:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:16.041 15:05:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.041 15:05:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.041 15:05:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:16.041 15:05:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.041 15:05:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.041 15:05:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:16.041 15:05:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:16.041 15:05:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.041 15:05:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.334 15:05:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.334 15:05:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.334 15:05:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.334 15:05:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.334 15:05:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.334 15:05:01 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.334 15:05:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:24:16.334 00:24:16.334 --- 10.0.0.2 ping statistics --- 00:24:16.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.334 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:24:16.334 15:05:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:16.334 00:24:16.334 --- 10.0.0.1 ping statistics --- 00:24:16.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.334 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:16.334 15:05:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.334 15:05:01 -- nvmf/common.sh@411 -- # return 0 00:24:16.334 15:05:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:16.334 15:05:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.334 15:05:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:16.334 15:05:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:16.334 15:05:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.334 15:05:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:16.334 15:05:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:16.334 15:05:01 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:16.334 15:05:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:16.334 15:05:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:16.334 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:24:16.334 15:05:01 -- nvmf/common.sh@470 -- # nvmfpid=3846940 00:24:16.334 15:05:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:16.334 15:05:01 -- nvmf/common.sh@471 -- # waitforlisten 3846940 00:24:16.334 15:05:01 -- common/autotest_common.sh@817 -- # '[' -z 3846940 ']' 00:24:16.334 15:05:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.334 15:05:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:16.334 15:05:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.334 15:05:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:16.334 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:24:16.334 [2024-04-26 15:05:01.930664] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:24:16.334 [2024-04-26 15:05:01.930747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.334 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.334 [2024-04-26 15:05:01.967901] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
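The aer.sh test being brought up here exercises namespace-change notification end to end: cnode1 is created with a two-namespace cap (-m 2), the aer test binary connects and arms an Asynchronous Event Request, and the script then hot-adds a second malloc bdev as nsid 2. The AER completes with log page 4 (Changed Namespace List), visible below as "aer_cb - Changed Namespace", and nvmf_get_subsystems afterwards shows both namespaces. A sketch of the hot-add that fires the event, again assuming scripts/rpc.py:

  # Namespace hot-add that triggers the Changed Namespace AEN; arguments
  # copied from the aer.sh trace below (64 MiB bdev, 4 KiB blocks, nsid 2).
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  # connected hosts see: aer_cb for log page 4 (Changed Namespace List)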
00:24:16.334 [2024-04-26 15:05:01.994365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.593 [2024-04-26 15:05:02.078869] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.593 [2024-04-26 15:05:02.078920] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.593 [2024-04-26 15:05:02.078943] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.593 [2024-04-26 15:05:02.078954] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.593 [2024-04-26 15:05:02.078964] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.593 [2024-04-26 15:05:02.079047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.593 [2024-04-26 15:05:02.079078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.593 [2024-04-26 15:05:02.079153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.593 [2024-04-26 15:05:02.079155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.593 15:05:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:16.593 15:05:02 -- common/autotest_common.sh@850 -- # return 0 00:24:16.593 15:05:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:16.593 15:05:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:16.593 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 15:05:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.593 15:05:02 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.593 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 [2024-04-26 15:05:02.215516] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.593 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 15:05:02 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:16.593 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 Malloc0 00:24:16.593 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 15:05:02 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:16.593 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 15:05:02 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.593 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 15:05:02 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.593 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 [2024-04-26 15:05:02.266559] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.593 15:05:02 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 15:05:02 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:16.593 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 [2024-04-26 15:05:02.274293] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:16.593 [ 00:24:16.593 { 00:24:16.593 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:16.593 "subtype": "Discovery", 00:24:16.593 "listen_addresses": [], 00:24:16.593 "allow_any_host": true, 00:24:16.593 "hosts": [] 00:24:16.593 }, 00:24:16.593 { 00:24:16.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.593 "subtype": "NVMe", 00:24:16.593 "listen_addresses": [ 00:24:16.593 { 00:24:16.593 "transport": "TCP", 00:24:16.593 "trtype": "TCP", 00:24:16.593 "adrfam": "IPv4", 00:24:16.593 "traddr": "10.0.0.2", 00:24:16.593 "trsvcid": "4420" 00:24:16.593 } 00:24:16.593 ], 00:24:16.593 "allow_any_host": true, 00:24:16.593 "hosts": [], 00:24:16.593 "serial_number": "SPDK00000000000001", 00:24:16.593 "model_number": "SPDK bdev Controller", 00:24:16.593 "max_namespaces": 2, 00:24:16.593 "min_cntlid": 1, 00:24:16.593 "max_cntlid": 65519, 00:24:16.593 "namespaces": [ 00:24:16.593 { 00:24:16.593 "nsid": 1, 00:24:16.593 "bdev_name": "Malloc0", 00:24:16.593 "name": "Malloc0", 00:24:16.593 "nguid": "CE778DC3DD6C409CB14F5E05C56BC589", 00:24:16.593 "uuid": "ce778dc3-dd6c-409c-b14f-5e05c56bc589" 00:24:16.593 } 00:24:16.593 ] 00:24:16.593 } 00:24:16.593 ] 00:24:16.593 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 15:05:02 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:16.593 15:05:02 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:16.593 15:05:02 -- host/aer.sh@33 -- # aerpid=3846965 00:24:16.593 15:05:02 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:16.593 15:05:02 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:16.593 15:05:02 -- common/autotest_common.sh@1251 -- # local i=0 00:24:16.593 15:05:02 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:16.593 15:05:02 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:24:16.593 15:05:02 -- common/autotest_common.sh@1254 -- # i=1 00:24:16.593 15:05:02 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:16.593 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.852 15:05:02 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:16.852 15:05:02 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:24:16.852 15:05:02 -- common/autotest_common.sh@1254 -- # i=2 00:24:16.852 15:05:02 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:16.852 15:05:02 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:16.852 15:05:02 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:16.852 15:05:02 -- common/autotest_common.sh@1262 -- # return 0 00:24:16.852 15:05:02 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:16.852 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.852 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.852 Malloc1 00:24:16.852 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.852 15:05:02 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:16.852 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.852 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.852 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.852 15:05:02 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:16.852 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.852 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:16.852 Asynchronous Event Request test 00:24:16.852 Attaching to 10.0.0.2 00:24:16.852 Attached to 10.0.0.2 00:24:16.852 Registering asynchronous event callbacks... 00:24:16.852 Starting namespace attribute notice tests for all controllers... 00:24:16.852 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:16.852 aer_cb - Changed Namespace 00:24:16.852 Cleaning up... 00:24:16.852 [ 00:24:16.852 { 00:24:16.852 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:16.852 "subtype": "Discovery", 00:24:16.852 "listen_addresses": [], 00:24:16.852 "allow_any_host": true, 00:24:16.852 "hosts": [] 00:24:16.852 }, 00:24:16.852 { 00:24:16.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.852 "subtype": "NVMe", 00:24:16.852 "listen_addresses": [ 00:24:16.852 { 00:24:16.852 "transport": "TCP", 00:24:16.852 "trtype": "TCP", 00:24:16.852 "adrfam": "IPv4", 00:24:16.852 "traddr": "10.0.0.2", 00:24:16.852 "trsvcid": "4420" 00:24:16.852 } 00:24:16.852 ], 00:24:16.852 "allow_any_host": true, 00:24:16.852 "hosts": [], 00:24:16.852 "serial_number": "SPDK00000000000001", 00:24:16.852 "model_number": "SPDK bdev Controller", 00:24:16.852 "max_namespaces": 2, 00:24:16.852 "min_cntlid": 1, 00:24:16.852 "max_cntlid": 65519, 00:24:16.852 "namespaces": [ 00:24:16.852 { 00:24:16.852 "nsid": 1, 00:24:16.852 "bdev_name": "Malloc0", 00:24:16.852 "name": "Malloc0", 00:24:16.852 "nguid": "CE778DC3DD6C409CB14F5E05C56BC589", 00:24:16.852 "uuid": "ce778dc3-dd6c-409c-b14f-5e05c56bc589" 00:24:16.852 }, 00:24:16.852 { 00:24:16.852 "nsid": 2, 00:24:16.852 "bdev_name": "Malloc1", 00:24:16.852 "name": "Malloc1", 00:24:16.852 "nguid": "374862DC7BF74E3BA1C5BEF80063D2FF", 00:24:16.852 "uuid": "374862dc-7bf7-4e3b-a1c5-bef80063d2ff" 00:24:16.852 } 00:24:16.852 ] 00:24:16.852 } 00:24:16.852 ] 00:24:16.852 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.852 15:05:02 -- host/aer.sh@43 -- # wait 3846965 00:24:16.852 15:05:02 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:16.852 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.852 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:17.110 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.111 15:05:02 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:17.111 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.111 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:17.111 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.111 15:05:02 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.111 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.111 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:17.111 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.111 15:05:02 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:17.111 15:05:02 -- host/aer.sh@51 -- # nvmftestfini 00:24:17.111 15:05:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:17.111 15:05:02 -- nvmf/common.sh@117 -- # sync 00:24:17.111 15:05:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.111 15:05:02 -- nvmf/common.sh@120 -- # set +e 00:24:17.111 15:05:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.111 15:05:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.111 rmmod nvme_tcp 00:24:17.111 rmmod nvme_fabrics 00:24:17.111 rmmod nvme_keyring 00:24:17.111 15:05:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.111 15:05:02 -- nvmf/common.sh@124 -- # set -e 00:24:17.111 15:05:02 -- nvmf/common.sh@125 -- # return 0 00:24:17.111 15:05:02 -- nvmf/common.sh@478 -- # '[' -n 3846940 ']' 00:24:17.111 15:05:02 -- nvmf/common.sh@479 -- # killprocess 3846940 00:24:17.111 15:05:02 -- common/autotest_common.sh@936 -- # '[' -z 3846940 ']' 00:24:17.111 15:05:02 -- common/autotest_common.sh@940 -- # kill -0 3846940 00:24:17.111 15:05:02 -- common/autotest_common.sh@941 -- # uname 00:24:17.111 15:05:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.111 15:05:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3846940 00:24:17.111 15:05:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:17.111 15:05:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:17.111 15:05:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3846940' 00:24:17.111 killing process with pid 3846940 00:24:17.111 15:05:02 -- common/autotest_common.sh@955 -- # kill 3846940 00:24:17.111 [2024-04-26 15:05:02.711107] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:17.111 15:05:02 -- common/autotest_common.sh@960 -- # wait 3846940 00:24:17.371 15:05:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:17.371 15:05:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:17.371 15:05:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:17.371 15:05:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.371 15:05:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.371 15:05:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.371 15:05:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.371 15:05:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.280 15:05:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.280 00:24:19.280 real 0m5.305s 00:24:19.280 user 0m4.036s 00:24:19.280 sys 0m1.863s 00:24:19.280 15:05:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:19.280 15:05:04 -- common/autotest_common.sh@10 -- # set +x 00:24:19.280 ************************************ 00:24:19.280 END TEST nvmf_aer 00:24:19.280 ************************************ 00:24:19.280 15:05:05 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:19.280 15:05:05 -- common/autotest_common.sh@1087 -- # 
'[' 3 -le 1 ']' 00:24:19.280 15:05:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:19.280 15:05:05 -- common/autotest_common.sh@10 -- # set +x 00:24:19.539 ************************************ 00:24:19.539 START TEST nvmf_async_init 00:24:19.539 ************************************ 00:24:19.539 15:05:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:19.539 * Looking for test storage... 00:24:19.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:19.539 15:05:05 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.539 15:05:05 -- nvmf/common.sh@7 -- # uname -s 00:24:19.539 15:05:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.539 15:05:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.539 15:05:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.539 15:05:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.539 15:05:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.539 15:05:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.539 15:05:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.539 15:05:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.539 15:05:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.539 15:05:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.539 15:05:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.539 15:05:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.539 15:05:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.539 15:05:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.539 15:05:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.539 15:05:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.539 15:05:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.539 15:05:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.539 15:05:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.539 15:05:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.539 15:05:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.539 15:05:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.539 15:05:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.539 15:05:05 -- paths/export.sh@5 -- # export PATH 00:24:19.539 15:05:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.539 15:05:05 -- nvmf/common.sh@47 -- # : 0 00:24:19.539 15:05:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.539 15:05:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.539 15:05:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.539 15:05:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.539 15:05:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.539 15:05:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.539 15:05:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.539 15:05:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.539 15:05:05 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:19.539 15:05:05 -- host/async_init.sh@14 -- # null_block_size=512 00:24:19.539 15:05:05 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:19.539 15:05:05 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:19.539 15:05:05 -- host/async_init.sh@20 -- # uuidgen 00:24:19.539 15:05:05 -- host/async_init.sh@20 -- # tr -d - 00:24:19.539 15:05:05 -- host/async_init.sh@20 -- # nguid=e97f81560a0b498d8e63bed931b31d40 00:24:19.539 15:05:05 -- host/async_init.sh@22 -- # nvmftestinit 00:24:19.539 15:05:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:19.539 15:05:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.539 15:05:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:19.539 15:05:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:19.539 15:05:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:19.539 15:05:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.539 15:05:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.539 15:05:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.539 
15:05:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:19.539 15:05:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:19.539 15:05:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.539 15:05:05 -- common/autotest_common.sh@10 -- # set +x 00:24:21.438 15:05:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:21.438 15:05:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.438 15:05:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.438 15:05:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.438 15:05:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.438 15:05:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.438 15:05:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.438 15:05:07 -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.438 15:05:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.438 15:05:07 -- nvmf/common.sh@296 -- # e810=() 00:24:21.438 15:05:07 -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.438 15:05:07 -- nvmf/common.sh@297 -- # x722=() 00:24:21.438 15:05:07 -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.438 15:05:07 -- nvmf/common.sh@298 -- # mlx=() 00:24:21.438 15:05:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.438 15:05:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.438 15:05:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.438 15:05:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.438 15:05:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.438 15:05:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.438 15:05:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:21.438 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:21.438 15:05:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.438 15:05:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:21.438 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:21.438 15:05:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.438 
15:05:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.438 15:05:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.438 15:05:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.438 15:05:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:21.438 15:05:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.438 15:05:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:21.438 Found net devices under 0000:84:00.0: cvl_0_0 00:24:21.438 15:05:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.438 15:05:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.438 15:05:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.438 15:05:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:21.438 15:05:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.438 15:05:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:21.438 Found net devices under 0000:84:00.1: cvl_0_1 00:24:21.438 15:05:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.438 15:05:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:21.438 15:05:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:21.438 15:05:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:21.438 15:05:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:21.438 15:05:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.438 15:05:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.438 15:05:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.438 15:05:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.438 15:05:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.438 15:05:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.438 15:05:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.438 15:05:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.438 15:05:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.438 15:05:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.438 15:05:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.438 15:05:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.438 15:05:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.696 15:05:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.696 15:05:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.696 15:05:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.696 15:05:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.696 15:05:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.696 15:05:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:24:21.696 15:05:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:24:21.696 00:24:21.696 --- 10.0.0.2 ping statistics --- 00:24:21.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.696 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:21.696 15:05:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:24:21.696 00:24:21.696 --- 10.0.0.1 ping statistics --- 00:24:21.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.696 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:24:21.696 15:05:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.696 15:05:07 -- nvmf/common.sh@411 -- # return 0 00:24:21.696 15:05:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:21.696 15:05:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.696 15:05:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:21.696 15:05:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:21.696 15:05:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.696 15:05:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:21.696 15:05:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:21.696 15:05:07 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:21.696 15:05:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:21.696 15:05:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:21.696 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.696 15:05:07 -- nvmf/common.sh@470 -- # nvmfpid=3848927 00:24:21.696 15:05:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:21.696 15:05:07 -- nvmf/common.sh@471 -- # waitforlisten 3848927 00:24:21.696 15:05:07 -- common/autotest_common.sh@817 -- # '[' -z 3848927 ']' 00:24:21.696 15:05:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.696 15:05:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:21.696 15:05:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.696 15:05:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:21.696 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.696 [2024-04-26 15:05:07.302535] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:24:21.696 [2024-04-26 15:05:07.302610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.696 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.696 [2024-04-26 15:05:07.339754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
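For reference, the nvmf_tcp_init sequence traced above reduces to a two-port loopback topology; the following is a condensed sketch of the exact commands in the trace (cvl_0_0 and cvl_0_1 are the two ice ports enumerated earlier, and the namespace name matches NVMF_TARGET_NAMESPACE):

ip netns add cvl_0_0_ns_spdk                                        # the target port gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port toward the initiator
ping -c 1 10.0.0.2                                                  # reachability check before nvmf_tgt starts

nvmf_tgt itself is then launched through "ip netns exec cvl_0_0_ns_spdk", which is why every listener in these suites binds to 10.0.0.2.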
00:24:21.696 [2024-04-26 15:05:07.365790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.955 [2024-04-26 15:05:07.455172] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.955 [2024-04-26 15:05:07.455224] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.955 [2024-04-26 15:05:07.455237] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.955 [2024-04-26 15:05:07.455249] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.955 [2024-04-26 15:05:07.455267] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.955 [2024-04-26 15:05:07.455302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.955 15:05:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:21.955 15:05:07 -- common/autotest_common.sh@850 -- # return 0 00:24:21.955 15:05:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:21.955 15:05:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:21.955 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 15:05:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.955 15:05:07 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:21.955 15:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.955 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 [2024-04-26 15:05:07.591784] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.955 15:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.955 15:05:07 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:21.955 15:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.955 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 null0 00:24:21.955 15:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.955 15:05:07 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:21.955 15:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.955 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 15:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.955 15:05:07 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:21.955 15:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.955 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 15:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.955 15:05:07 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e97f81560a0b498d8e63bed931b31d40 00:24:21.955 15:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.955 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 15:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.955 15:05:07 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:21.955 15:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.955 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 [2024-04-26 15:05:07.632088] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.955 15:05:07 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.955 15:05:07 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:21.955 15:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.955 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:22.214 nvme0n1 00:24:22.214 15:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.214 15:05:07 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.214 15:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.214 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:22.214 [ 00:24:22.214 { 00:24:22.214 "name": "nvme0n1", 00:24:22.214 "aliases": [ 00:24:22.214 "e97f8156-0a0b-498d-8e63-bed931b31d40" 00:24:22.214 ], 00:24:22.214 "product_name": "NVMe disk", 00:24:22.214 "block_size": 512, 00:24:22.214 "num_blocks": 2097152, 00:24:22.214 "uuid": "e97f8156-0a0b-498d-8e63-bed931b31d40", 00:24:22.214 "assigned_rate_limits": { 00:24:22.214 "rw_ios_per_sec": 0, 00:24:22.214 "rw_mbytes_per_sec": 0, 00:24:22.214 "r_mbytes_per_sec": 0, 00:24:22.214 "w_mbytes_per_sec": 0 00:24:22.214 }, 00:24:22.214 "claimed": false, 00:24:22.214 "zoned": false, 00:24:22.214 "supported_io_types": { 00:24:22.214 "read": true, 00:24:22.214 "write": true, 00:24:22.214 "unmap": false, 00:24:22.214 "write_zeroes": true, 00:24:22.214 "flush": true, 00:24:22.214 "reset": true, 00:24:22.214 "compare": true, 00:24:22.214 "compare_and_write": true, 00:24:22.214 "abort": true, 00:24:22.214 "nvme_admin": true, 00:24:22.214 "nvme_io": true 00:24:22.214 }, 00:24:22.214 "memory_domains": [ 00:24:22.214 { 00:24:22.214 "dma_device_id": "system", 00:24:22.214 "dma_device_type": 1 00:24:22.214 } 00:24:22.214 ], 00:24:22.214 "driver_specific": { 00:24:22.214 "nvme": [ 00:24:22.214 { 00:24:22.214 "trid": { 00:24:22.214 "trtype": "TCP", 00:24:22.214 "adrfam": "IPv4", 00:24:22.214 "traddr": "10.0.0.2", 00:24:22.214 "trsvcid": "4420", 00:24:22.214 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.214 }, 00:24:22.214 "ctrlr_data": { 00:24:22.214 "cntlid": 1, 00:24:22.214 "vendor_id": "0x8086", 00:24:22.214 "model_number": "SPDK bdev Controller", 00:24:22.214 "serial_number": "00000000000000000000", 00:24:22.214 "firmware_revision": "24.05", 00:24:22.214 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.214 "oacs": { 00:24:22.214 "security": 0, 00:24:22.214 "format": 0, 00:24:22.214 "firmware": 0, 00:24:22.214 "ns_manage": 0 00:24:22.214 }, 00:24:22.214 "multi_ctrlr": true, 00:24:22.214 "ana_reporting": false 00:24:22.214 }, 00:24:22.214 "vs": { 00:24:22.214 "nvme_version": "1.3" 00:24:22.214 }, 00:24:22.214 "ns_data": { 00:24:22.214 "id": 1, 00:24:22.214 "can_share": true 00:24:22.214 } 00:24:22.214 } 00:24:22.214 ], 00:24:22.214 "mp_policy": "active_passive" 00:24:22.214 } 00:24:22.214 } 00:24:22.214 ] 00:24:22.214 15:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.214 15:05:07 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:22.214 15:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.214 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:22.214 [2024-04-26 15:05:07.884601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:22.214 [2024-04-26 15:05:07.884697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190b140 (9): Bad file descriptor 00:24:22.473 [2024-04-26 15:05:08.027172] 
bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:22.473 15:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.473 15:05:08 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.473 15:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.473 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:22.473 [ 00:24:22.473 { 00:24:22.473 "name": "nvme0n1", 00:24:22.473 "aliases": [ 00:24:22.473 "e97f8156-0a0b-498d-8e63-bed931b31d40" 00:24:22.473 ], 00:24:22.473 "product_name": "NVMe disk", 00:24:22.473 "block_size": 512, 00:24:22.473 "num_blocks": 2097152, 00:24:22.473 "uuid": "e97f8156-0a0b-498d-8e63-bed931b31d40", 00:24:22.473 "assigned_rate_limits": { 00:24:22.473 "rw_ios_per_sec": 0, 00:24:22.473 "rw_mbytes_per_sec": 0, 00:24:22.473 "r_mbytes_per_sec": 0, 00:24:22.473 "w_mbytes_per_sec": 0 00:24:22.473 }, 00:24:22.473 "claimed": false, 00:24:22.473 "zoned": false, 00:24:22.473 "supported_io_types": { 00:24:22.473 "read": true, 00:24:22.473 "write": true, 00:24:22.473 "unmap": false, 00:24:22.473 "write_zeroes": true, 00:24:22.473 "flush": true, 00:24:22.473 "reset": true, 00:24:22.473 "compare": true, 00:24:22.473 "compare_and_write": true, 00:24:22.473 "abort": true, 00:24:22.473 "nvme_admin": true, 00:24:22.473 "nvme_io": true 00:24:22.473 }, 00:24:22.473 "memory_domains": [ 00:24:22.473 { 00:24:22.473 "dma_device_id": "system", 00:24:22.473 "dma_device_type": 1 00:24:22.473 } 00:24:22.473 ], 00:24:22.473 "driver_specific": { 00:24:22.473 "nvme": [ 00:24:22.473 { 00:24:22.473 "trid": { 00:24:22.473 "trtype": "TCP", 00:24:22.473 "adrfam": "IPv4", 00:24:22.473 "traddr": "10.0.0.2", 00:24:22.473 "trsvcid": "4420", 00:24:22.473 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.473 }, 00:24:22.473 "ctrlr_data": { 00:24:22.473 "cntlid": 2, 00:24:22.473 "vendor_id": "0x8086", 00:24:22.473 "model_number": "SPDK bdev Controller", 00:24:22.473 "serial_number": "00000000000000000000", 00:24:22.473 "firmware_revision": "24.05", 00:24:22.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.473 "oacs": { 00:24:22.473 "security": 0, 00:24:22.473 "format": 0, 00:24:22.473 "firmware": 0, 00:24:22.473 "ns_manage": 0 00:24:22.473 }, 00:24:22.473 "multi_ctrlr": true, 00:24:22.473 "ana_reporting": false 00:24:22.473 }, 00:24:22.473 "vs": { 00:24:22.473 "nvme_version": "1.3" 00:24:22.473 }, 00:24:22.473 "ns_data": { 00:24:22.473 "id": 1, 00:24:22.473 "can_share": true 00:24:22.473 } 00:24:22.473 } 00:24:22.473 ], 00:24:22.473 "mp_policy": "active_passive" 00:24:22.473 } 00:24:22.473 } 00:24:22.473 ] 00:24:22.473 15:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.473 15:05:08 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.473 15:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.473 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:22.473 15:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.473 15:05:08 -- host/async_init.sh@53 -- # mktemp 00:24:22.473 15:05:08 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.vlEdgodUAy 00:24:22.473 15:05:08 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:22.473 15:05:08 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.vlEdgodUAy 00:24:22.473 15:05:08 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:22.473 15:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 
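The secure-channel leg of async_init begins at this point and hinges on a single interchange-format TLS PSK shared by both ends. Stripped of the xtrace plumbing, the next few RPCs amount to the sketch below (the key is the suite's well-known sample value, not a secret; rpc_cmd is the harness wrapper around scripts/rpc.py):

key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"                               # keep the key file private to the test user
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

Both sides warn that the file-path PSK is deprecated and slated for removal in v24.09, the same deprecations the log_deprecation_hits summary repeats at teardown.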
00:24:22.473 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:22.473 15:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.473 15:05:08 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:22.473 15:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.473 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:22.473 [2024-04-26 15:05:08.077257] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.473 [2024-04-26 15:05:08.077398] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.473 15:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.473 15:05:08 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vlEdgodUAy 00:24:22.473 15:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.473 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:22.473 [2024-04-26 15:05:08.085287] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:22.473 15:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.473 15:05:08 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vlEdgodUAy 00:24:22.473 15:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.473 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:22.473 [2024-04-26 15:05:08.093286] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.473 [2024-04-26 15:05:08.093354] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:22.473 nvme0n1 00:24:22.473 15:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.473 15:05:08 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.473 15:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.473 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:22.473 [ 00:24:22.473 { 00:24:22.473 "name": "nvme0n1", 00:24:22.473 "aliases": [ 00:24:22.473 "e97f8156-0a0b-498d-8e63-bed931b31d40" 00:24:22.473 ], 00:24:22.473 "product_name": "NVMe disk", 00:24:22.473 "block_size": 512, 00:24:22.473 "num_blocks": 2097152, 00:24:22.473 "uuid": "e97f8156-0a0b-498d-8e63-bed931b31d40", 00:24:22.473 "assigned_rate_limits": { 00:24:22.473 "rw_ios_per_sec": 0, 00:24:22.473 "rw_mbytes_per_sec": 0, 00:24:22.473 "r_mbytes_per_sec": 0, 00:24:22.473 "w_mbytes_per_sec": 0 00:24:22.473 }, 00:24:22.473 "claimed": false, 00:24:22.473 "zoned": false, 00:24:22.473 "supported_io_types": { 00:24:22.473 "read": true, 00:24:22.473 "write": true, 00:24:22.473 "unmap": false, 00:24:22.473 "write_zeroes": true, 00:24:22.473 "flush": true, 00:24:22.473 "reset": true, 00:24:22.473 "compare": true, 00:24:22.473 "compare_and_write": true, 00:24:22.473 "abort": true, 00:24:22.473 "nvme_admin": true, 00:24:22.473 "nvme_io": true 00:24:22.473 }, 00:24:22.473 "memory_domains": [ 00:24:22.473 { 00:24:22.473 "dma_device_id": "system", 00:24:22.473 "dma_device_type": 1 00:24:22.473 } 00:24:22.473 ], 00:24:22.473 "driver_specific": { 00:24:22.473 "nvme": [ 00:24:22.473 { 00:24:22.473 "trid": { 00:24:22.473 
"trtype": "TCP", 00:24:22.473 "adrfam": "IPv4", 00:24:22.473 "traddr": "10.0.0.2", 00:24:22.473 "trsvcid": "4421", 00:24:22.473 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.473 }, 00:24:22.473 "ctrlr_data": { 00:24:22.473 "cntlid": 3, 00:24:22.473 "vendor_id": "0x8086", 00:24:22.473 "model_number": "SPDK bdev Controller", 00:24:22.473 "serial_number": "00000000000000000000", 00:24:22.473 "firmware_revision": "24.05", 00:24:22.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.473 "oacs": { 00:24:22.473 "security": 0, 00:24:22.473 "format": 0, 00:24:22.474 "firmware": 0, 00:24:22.474 "ns_manage": 0 00:24:22.474 }, 00:24:22.474 "multi_ctrlr": true, 00:24:22.474 "ana_reporting": false 00:24:22.474 }, 00:24:22.474 "vs": { 00:24:22.474 "nvme_version": "1.3" 00:24:22.474 }, 00:24:22.474 "ns_data": { 00:24:22.474 "id": 1, 00:24:22.474 "can_share": true 00:24:22.474 } 00:24:22.474 } 00:24:22.474 ], 00:24:22.474 "mp_policy": "active_passive" 00:24:22.474 } 00:24:22.474 } 00:24:22.474 ] 00:24:22.474 15:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.474 15:05:08 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.474 15:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.474 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:22.474 15:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.474 15:05:08 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.vlEdgodUAy 00:24:22.474 15:05:08 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:22.474 15:05:08 -- host/async_init.sh@78 -- # nvmftestfini 00:24:22.474 15:05:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:22.474 15:05:08 -- nvmf/common.sh@117 -- # sync 00:24:22.474 15:05:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.474 15:05:08 -- nvmf/common.sh@120 -- # set +e 00:24:22.474 15:05:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.474 15:05:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.474 rmmod nvme_tcp 00:24:22.733 rmmod nvme_fabrics 00:24:22.733 rmmod nvme_keyring 00:24:22.733 15:05:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.733 15:05:08 -- nvmf/common.sh@124 -- # set -e 00:24:22.733 15:05:08 -- nvmf/common.sh@125 -- # return 0 00:24:22.733 15:05:08 -- nvmf/common.sh@478 -- # '[' -n 3848927 ']' 00:24:22.733 15:05:08 -- nvmf/common.sh@479 -- # killprocess 3848927 00:24:22.733 15:05:08 -- common/autotest_common.sh@936 -- # '[' -z 3848927 ']' 00:24:22.733 15:05:08 -- common/autotest_common.sh@940 -- # kill -0 3848927 00:24:22.733 15:05:08 -- common/autotest_common.sh@941 -- # uname 00:24:22.733 15:05:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:22.733 15:05:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3848927 00:24:22.733 15:05:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:22.733 15:05:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:22.733 15:05:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3848927' 00:24:22.733 killing process with pid 3848927 00:24:22.733 15:05:08 -- common/autotest_common.sh@955 -- # kill 3848927 00:24:22.733 [2024-04-26 15:05:08.286230] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:22.733 [2024-04-26 15:05:08.286266] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:22.733 
15:05:08 -- common/autotest_common.sh@960 -- # wait 3848927 00:24:22.993 15:05:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:22.993 15:05:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:22.993 15:05:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:22.993 15:05:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.993 15:05:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.993 15:05:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.993 15:05:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.993 15:05:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.896 15:05:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:24.896 00:24:24.896 real 0m5.435s 00:24:24.896 user 0m2.064s 00:24:24.896 sys 0m1.743s 00:24:24.896 15:05:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:24.896 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:24:24.896 ************************************ 00:24:24.896 END TEST nvmf_async_init 00:24:24.896 ************************************ 00:24:24.896 15:05:10 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:24.896 15:05:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:24.896 15:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:24.896 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:24:25.154 ************************************ 00:24:25.154 START TEST dma 00:24:25.154 ************************************ 00:24:25.154 15:05:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:25.154 * Looking for test storage... 
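Both host suites so far have ended with the same nvmftestfini teardown, traced once after nvmf_aer and again just above; in outline:

modprobe -v -r nvme-tcp              # the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: verify the pid is reactor_0, then terminate and reap it
ip -4 addr flush cvl_0_1             # return the initiator port to a clean state
# _remove_spdk_ns runs with tracing disabled; its body (assumed to delete cvl_0_0_ns_spdk) is not shown.

On TCP, the dma suite does not re-run nvmftestinit at all; see the note after its trace below.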
00:24:25.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.154 15:05:10 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.154 15:05:10 -- nvmf/common.sh@7 -- # uname -s 00:24:25.154 15:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.154 15:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.154 15:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.154 15:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.154 15:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.154 15:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.154 15:05:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.154 15:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.154 15:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.154 15:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.154 15:05:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:25.154 15:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:25.154 15:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.154 15:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.154 15:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.154 15:05:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.154 15:05:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.154 15:05:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.154 15:05:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.154 15:05:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.154 15:05:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.155 15:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.155 15:05:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.155 15:05:10 -- paths/export.sh@5 -- # export PATH 00:24:25.155 15:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.155 15:05:10 -- nvmf/common.sh@47 -- # : 0 00:24:25.155 15:05:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.155 15:05:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.155 15:05:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.155 15:05:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.155 15:05:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.155 15:05:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.155 15:05:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.155 15:05:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.155 15:05:10 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:25.155 15:05:10 -- host/dma.sh@13 -- # exit 0 00:24:25.155 00:24:25.155 real 0m0.075s 00:24:25.155 user 0m0.035s 00:24:25.155 sys 0m0.046s 00:24:25.155 15:05:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:25.155 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:24:25.155 ************************************ 00:24:25.155 END TEST dma 00:24:25.155 ************************************ 00:24:25.155 15:05:10 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:25.155 15:05:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:25.155 15:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:25.155 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:24:25.155 ************************************ 00:24:25.155 START TEST nvmf_identify 00:24:25.155 ************************************ 00:24:25.155 15:05:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:25.413 * Looking for test storage... 
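On TCP the dma suite just traced is a no-op: its entire body reduces to the transport guard at lines 12 and 13 of host/dma.sh, in effect

[ "$TEST_TRANSPORT" != rdma ] && exit 0          # DMA offload is only exercised over RDMA transports

(the variable name is an assumption here; the trace only shows the already-expanded test '[' tcp '!=' rdma ']'). That is why the END TEST dma banner reports sub-second real/user/sys times and nvmf_identify starts right away.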
00:24:25.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.413 15:05:10 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.413 15:05:10 -- nvmf/common.sh@7 -- # uname -s 00:24:25.413 15:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.413 15:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.413 15:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.413 15:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.413 15:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.413 15:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.413 15:05:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.413 15:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.413 15:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.413 15:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.413 15:05:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:25.413 15:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:25.413 15:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.413 15:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.413 15:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.413 15:05:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.413 15:05:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.413 15:05:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.413 15:05:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.413 15:05:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.413 15:05:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.413 15:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.413 15:05:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.413 15:05:10 -- paths/export.sh@5 -- # export PATH 00:24:25.413 15:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.414 15:05:10 -- nvmf/common.sh@47 -- # : 0 00:24:25.414 15:05:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.414 15:05:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.414 15:05:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.414 15:05:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.414 15:05:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.414 15:05:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.414 15:05:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.414 15:05:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.414 15:05:10 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:25.414 15:05:10 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:25.414 15:05:10 -- host/identify.sh@14 -- # nvmftestinit 00:24:25.414 15:05:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:25.414 15:05:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.414 15:05:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:25.414 15:05:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:25.414 15:05:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:25.414 15:05:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.414 15:05:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.414 15:05:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.414 15:05:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:25.414 15:05:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:25.414 15:05:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.414 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:24:27.312 15:05:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:27.312 15:05:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:27.312 15:05:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:27.312 15:05:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:27.312 15:05:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:27.312 15:05:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:27.312 15:05:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:27.312 15:05:12 -- nvmf/common.sh@295 -- # net_devs=() 00:24:27.312 15:05:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:27.312 15:05:12 -- nvmf/common.sh@296 
-- # e810=() 00:24:27.312 15:05:12 -- nvmf/common.sh@296 -- # local -ga e810 00:24:27.312 15:05:12 -- nvmf/common.sh@297 -- # x722=() 00:24:27.312 15:05:12 -- nvmf/common.sh@297 -- # local -ga x722 00:24:27.312 15:05:12 -- nvmf/common.sh@298 -- # mlx=() 00:24:27.312 15:05:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:27.312 15:05:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.312 15:05:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:27.312 15:05:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:27.312 15:05:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:27.312 15:05:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.312 15:05:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:27.312 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:27.312 15:05:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.312 15:05:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:27.312 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:27.312 15:05:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:27.312 15:05:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.312 15:05:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.312 15:05:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:27.312 15:05:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.312 15:05:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:27.312 Found 
net devices under 0000:84:00.0: cvl_0_0 00:24:27.312 15:05:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.312 15:05:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.312 15:05:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.312 15:05:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:27.312 15:05:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.312 15:05:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:27.312 Found net devices under 0000:84:00.1: cvl_0_1 00:24:27.312 15:05:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.312 15:05:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:27.312 15:05:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:27.312 15:05:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:27.312 15:05:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.312 15:05:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.312 15:05:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.312 15:05:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:27.312 15:05:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.312 15:05:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.312 15:05:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:27.312 15:05:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.312 15:05:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.312 15:05:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:27.312 15:05:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:27.312 15:05:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.312 15:05:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.312 15:05:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.312 15:05:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.312 15:05:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:27.312 15:05:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.312 15:05:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.312 15:05:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.312 15:05:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:27.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:24:27.312 00:24:27.312 --- 10.0.0.2 ping statistics --- 00:24:27.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.312 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:27.312 15:05:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:24:27.312 00:24:27.312 --- 10.0.0.1 ping statistics --- 00:24:27.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.312 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:24:27.312 15:05:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.312 15:05:12 -- nvmf/common.sh@411 -- # return 0 00:24:27.312 15:05:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:27.312 15:05:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.312 15:05:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:27.312 15:05:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.312 15:05:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:27.312 15:05:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:27.312 15:05:12 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:27.312 15:05:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:27.312 15:05:12 -- common/autotest_common.sh@10 -- # set +x 00:24:27.312 15:05:12 -- host/identify.sh@19 -- # nvmfpid=3851082 00:24:27.312 15:05:12 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:27.312 15:05:12 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.312 15:05:12 -- host/identify.sh@23 -- # waitforlisten 3851082 00:24:27.312 15:05:12 -- common/autotest_common.sh@817 -- # '[' -z 3851082 ']' 00:24:27.312 15:05:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.312 15:05:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:27.312 15:05:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.312 15:05:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:27.312 15:05:12 -- common/autotest_common.sh@10 -- # set +x 00:24:27.312 [2024-04-26 15:05:12.990160] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:24:27.312 [2024-04-26 15:05:12.990230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.312 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.312 [2024-04-26 15:05:13.031842] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:27.570 [2024-04-26 15:05:13.064095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.570 [2024-04-26 15:05:13.166748] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.570 [2024-04-26 15:05:13.166824] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.570 [2024-04-26 15:05:13.166839] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.570 [2024-04-26 15:05:13.166852] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:27.570 [2024-04-26 15:05:13.166877] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.571 [2024-04-26 15:05:13.166995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.571 [2024-04-26 15:05:13.167072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.571 [2024-04-26 15:05:13.167112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.571 [2024-04-26 15:05:13.167115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.571 15:05:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:27.571 15:05:13 -- common/autotest_common.sh@850 -- # return 0 00:24:27.571 15:05:13 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.571 15:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.571 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:24:27.571 [2024-04-26 15:05:13.298828] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.571 15:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.571 15:05:13 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:27.571 15:05:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:27.571 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:24:27.832 15:05:13 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:27.832 15:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.832 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:24:27.832 Malloc0 00:24:27.832 15:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.832 15:05:13 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.832 15:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.832 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:24:27.832 15:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.832 15:05:13 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:27.832 15:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.832 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:24:27.832 15:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.832 15:05:13 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.832 15:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.832 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:24:27.832 [2024-04-26 15:05:13.376275] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.832 15:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.832 15:05:13 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:27.832 15:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.832 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:24:27.832 15:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.832 15:05:13 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:27.832 15:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.832 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:24:27.832 [2024-04-26 15:05:13.391958] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: 
rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:27.832 [ 00:24:27.832 { 00:24:27.832 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:27.832 "subtype": "Discovery", 00:24:27.832 "listen_addresses": [ 00:24:27.832 { 00:24:27.832 "transport": "TCP", 00:24:27.832 "trtype": "TCP", 00:24:27.832 "adrfam": "IPv4", 00:24:27.832 "traddr": "10.0.0.2", 00:24:27.832 "trsvcid": "4420" 00:24:27.832 } 00:24:27.832 ], 00:24:27.832 "allow_any_host": true, 00:24:27.832 "hosts": [] 00:24:27.832 }, 00:24:27.832 { 00:24:27.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.832 "subtype": "NVMe", 00:24:27.832 "listen_addresses": [ 00:24:27.832 { 00:24:27.832 "transport": "TCP", 00:24:27.832 "trtype": "TCP", 00:24:27.832 "adrfam": "IPv4", 00:24:27.832 "traddr": "10.0.0.2", 00:24:27.832 "trsvcid": "4420" 00:24:27.832 } 00:24:27.832 ], 00:24:27.832 "allow_any_host": true, 00:24:27.832 "hosts": [], 00:24:27.832 "serial_number": "SPDK00000000000001", 00:24:27.832 "model_number": "SPDK bdev Controller", 00:24:27.832 "max_namespaces": 32, 00:24:27.832 "min_cntlid": 1, 00:24:27.832 "max_cntlid": 65519, 00:24:27.832 "namespaces": [ 00:24:27.832 { 00:24:27.832 "nsid": 1, 00:24:27.832 "bdev_name": "Malloc0", 00:24:27.832 "name": "Malloc0", 00:24:27.832 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:27.832 "eui64": "ABCDEF0123456789", 00:24:27.832 "uuid": "2cf2facf-d863-4648-894f-bc7d32d92863" 00:24:27.832 } 00:24:27.832 ] 00:24:27.832 } 00:24:27.832 ] 00:24:27.833 15:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.833 15:05:13 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:27.833 [2024-04-26 15:05:13.418535] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:24:27.833 [2024-04-26 15:05:13.418578] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851226 ] 00:24:27.833 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.833 [2024-04-26 15:05:13.436816] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
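The RPC sequence traced above is the entire target-side setup for this test: create a TCP transport, back a namespace with a 64 MB malloc bdev (512-byte blocks), create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, then add listeners for both the subsystem and the discovery service on 10.0.0.2:4420. Issued by hand it reduces to roughly the following scripts/rpc.py calls (a sketch reusing the exact arguments from the rpc_cmd lines above; assumes the default /var/tmp/spdk.sock socket of the nvmf_tgt started earlier):

  # Sketch of the target setup performed above, via SPDK's scripts/rpc.py.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems    # prints the JSON dump shown above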
00:24:27.833 [2024-04-26 15:05:13.454690] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:27.833 [2024-04-26 15:05:13.454750] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:27.833 [2024-04-26 15:05:13.454760] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:27.833 [2024-04-26 15:05:13.454776] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:27.833 [2024-04-26 15:05:13.454790] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:27.833 [2024-04-26 15:05:13.458066] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:27.833 [2024-04-26 15:05:13.458130] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf6c190 0 00:24:27.833 [2024-04-26 15:05:13.465047] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:27.833 [2024-04-26 15:05:13.465069] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:27.833 [2024-04-26 15:05:13.465078] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:27.833 [2024-04-26 15:05:13.465084] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:27.833 [2024-04-26 15:05:13.465137] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.465149] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.465156] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.833 [2024-04-26 15:05:13.465174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:27.833 [2024-04-26 15:05:13.465206] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.833 [2024-04-26 15:05:13.473034] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.833 [2024-04-26 15:05:13.473052] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.833 [2024-04-26 15:05:13.473060] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.473068] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd48c0) on tqpair=0xf6c190 00:24:27.833 [2024-04-26 15:05:13.473089] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:27.833 [2024-04-26 15:05:13.473101] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:27.833 [2024-04-26 15:05:13.473111] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:27.833 [2024-04-26 15:05:13.473132] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.473141] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.473148] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.833 [2024-04-26 15:05:13.473159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.833 [2024-04-26 
15:05:13.473184] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.833 [2024-04-26 15:05:13.473447] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.833 [2024-04-26 15:05:13.473462] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.833 [2024-04-26 15:05:13.473469] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.473476] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd48c0) on tqpair=0xf6c190 00:24:27.833 [2024-04-26 15:05:13.473484] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:27.833 [2024-04-26 15:05:13.473498] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:27.833 [2024-04-26 15:05:13.473520] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.473528] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.473534] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.833 [2024-04-26 15:05:13.473545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.833 [2024-04-26 15:05:13.473566] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.833 [2024-04-26 15:05:13.473788] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.833 [2024-04-26 15:05:13.473800] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.833 [2024-04-26 15:05:13.473807] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.473814] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd48c0) on tqpair=0xf6c190 00:24:27.833 [2024-04-26 15:05:13.473822] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:27.833 [2024-04-26 15:05:13.473836] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:27.833 [2024-04-26 15:05:13.473848] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.473855] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.473862] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.833 [2024-04-26 15:05:13.473872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.833 [2024-04-26 15:05:13.473892] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.833 [2024-04-26 15:05:13.474054] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.833 [2024-04-26 15:05:13.474071] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.833 [2024-04-26 15:05:13.474078] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.474085] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd48c0) on tqpair=0xf6c190 00:24:27.833 [2024-04-26 15:05:13.474095] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:27.833 [2024-04-26 15:05:13.474112] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.474122] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.474128] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.833 [2024-04-26 15:05:13.474139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.833 [2024-04-26 15:05:13.474161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.833 [2024-04-26 15:05:13.474371] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.833 [2024-04-26 15:05:13.474386] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.833 [2024-04-26 15:05:13.474393] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.474400] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd48c0) on tqpair=0xf6c190 00:24:27.833 [2024-04-26 15:05:13.474408] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:27.833 [2024-04-26 15:05:13.474417] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:27.833 [2024-04-26 15:05:13.474431] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:27.833 [2024-04-26 15:05:13.474540] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:27.833 [2024-04-26 15:05:13.474548] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:27.833 [2024-04-26 15:05:13.474567] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.474574] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.474581] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.833 [2024-04-26 15:05:13.474591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.833 [2024-04-26 15:05:13.474613] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.833 [2024-04-26 15:05:13.474807] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.833 [2024-04-26 15:05:13.474822] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.833 [2024-04-26 15:05:13.474829] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.474835] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd48c0) on tqpair=0xf6c190 00:24:27.833 [2024-04-26 15:05:13.474844] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:27.833 [2024-04-26 15:05:13.474860] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.474869] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.474875] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.833 [2024-04-26 15:05:13.474886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.833 [2024-04-26 15:05:13.474910] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.833 [2024-04-26 15:05:13.475070] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.833 [2024-04-26 15:05:13.475085] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.833 [2024-04-26 15:05:13.475092] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.833 [2024-04-26 15:05:13.475099] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd48c0) on tqpair=0xf6c190 00:24:27.833 [2024-04-26 15:05:13.475107] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:27.833 [2024-04-26 15:05:13.475116] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:27.833 [2024-04-26 15:05:13.475130] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:27.834 [2024-04-26 15:05:13.475145] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:27.834 [2024-04-26 15:05:13.475162] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475170] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.475181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.834 [2024-04-26 15:05:13.475203] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.834 [2024-04-26 15:05:13.475405] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.834 [2024-04-26 15:05:13.475421] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.834 [2024-04-26 15:05:13.475428] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475434] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf6c190): datao=0, datal=4096, cccid=0 00:24:27.834 [2024-04-26 15:05:13.475442] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfd48c0) on tqpair(0xf6c190): expected_datao=0, payload_size=4096 00:24:27.834 [2024-04-26 15:05:13.475450] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475467] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475478] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475570] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.834 [2024-04-26 15:05:13.475581] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.834 [2024-04-26 15:05:13.475588] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475595] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd48c0) on tqpair=0xf6c190 00:24:27.834 [2024-04-26 15:05:13.475606] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:27.834 [2024-04-26 15:05:13.475615] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:27.834 [2024-04-26 15:05:13.475623] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:27.834 [2024-04-26 15:05:13.475631] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:27.834 [2024-04-26 15:05:13.475638] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:27.834 [2024-04-26 15:05:13.475646] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:27.834 [2024-04-26 15:05:13.475661] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:27.834 [2024-04-26 15:05:13.475677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475691] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.475702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:27.834 [2024-04-26 15:05:13.475722] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.834 [2024-04-26 15:05:13.475881] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.834 [2024-04-26 15:05:13.475895] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.834 [2024-04-26 15:05:13.475902] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475909] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd48c0) on tqpair=0xf6c190 00:24:27.834 [2024-04-26 15:05:13.475920] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475927] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475934] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.475943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.834 [2024-04-26 15:05:13.475953] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475960] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475966] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.475974] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.834 [2024-04-26 15:05:13.475984] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.475991] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.476012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.476028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.834 [2024-04-26 15:05:13.476040] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.476047] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.476053] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.476062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.834 [2024-04-26 15:05:13.476071] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:27.834 [2024-04-26 15:05:13.476091] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:27.834 [2024-04-26 15:05:13.476104] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.476112] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.476122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.834 [2024-04-26 15:05:13.476148] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd48c0, cid 0, qid 0 00:24:27.834 [2024-04-26 15:05:13.476159] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4a20, cid 1, qid 0 00:24:27.834 [2024-04-26 15:05:13.476168] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4b80, cid 2, qid 0 00:24:27.834 [2024-04-26 15:05:13.476179] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4ce0, cid 3, qid 0 00:24:27.834 [2024-04-26 15:05:13.476188] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4e40, cid 4, qid 0 00:24:27.834 [2024-04-26 15:05:13.476439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.834 [2024-04-26 15:05:13.476454] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.834 [2024-04-26 15:05:13.476461] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.476468] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4e40) on tqpair=0xf6c190 00:24:27.834 [2024-04-26 15:05:13.476478] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:27.834 [2024-04-26 15:05:13.476487] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:27.834 [2024-04-26 15:05:13.476505] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.476514] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.476525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.834 [2024-04-26 15:05:13.476547] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4e40, cid 4, qid 0 00:24:27.834 [2024-04-26 15:05:13.480033] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.834 [2024-04-26 15:05:13.480050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.834 [2024-04-26 15:05:13.480057] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480064] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf6c190): datao=0, datal=4096, cccid=4 00:24:27.834 [2024-04-26 15:05:13.480072] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfd4e40) on tqpair(0xf6c190): expected_datao=0, payload_size=4096 00:24:27.834 [2024-04-26 15:05:13.480079] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480090] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480098] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480107] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.834 [2024-04-26 15:05:13.480116] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.834 [2024-04-26 15:05:13.480122] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480129] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4e40) on tqpair=0xf6c190 00:24:27.834 [2024-04-26 15:05:13.480150] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:27.834 [2024-04-26 15:05:13.480182] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480191] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.480203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.834 [2024-04-26 15:05:13.480214] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480221] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480228] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf6c190) 00:24:27.834 [2024-04-26 15:05:13.480237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.834 [2024-04-26 15:05:13.480266] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4e40, cid 4, qid 0 00:24:27.834 [2024-04-26 15:05:13.480279] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4fa0, cid 5, qid 0 00:24:27.834 [2024-04-26 15:05:13.480527] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.834 [2024-04-26 15:05:13.480543] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:24:27.834 [2024-04-26 15:05:13.480550] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480556] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf6c190): datao=0, datal=1024, cccid=4 00:24:27.834 [2024-04-26 15:05:13.480564] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfd4e40) on tqpair(0xf6c190): expected_datao=0, payload_size=1024 00:24:27.834 [2024-04-26 15:05:13.480571] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480580] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480588] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.834 [2024-04-26 15:05:13.480596] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.834 [2024-04-26 15:05:13.480605] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.835 [2024-04-26 15:05:13.480612] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.480618] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4fa0) on tqpair=0xf6c190 00:24:27.835 [2024-04-26 15:05:13.524037] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.835 [2024-04-26 15:05:13.524056] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.835 [2024-04-26 15:05:13.524064] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524071] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4e40) on tqpair=0xf6c190 00:24:27.835 [2024-04-26 15:05:13.524097] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524107] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf6c190) 00:24:27.835 [2024-04-26 15:05:13.524119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.835 [2024-04-26 15:05:13.524151] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4e40, cid 4, qid 0 00:24:27.835 [2024-04-26 15:05:13.524359] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.835 [2024-04-26 15:05:13.524374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.835 [2024-04-26 15:05:13.524381] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524388] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf6c190): datao=0, datal=3072, cccid=4 00:24:27.835 [2024-04-26 15:05:13.524395] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfd4e40) on tqpair(0xf6c190): expected_datao=0, payload_size=3072 00:24:27.835 [2024-04-26 15:05:13.524403] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524488] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524498] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524657] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.835 [2024-04-26 15:05:13.524668] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.835 [2024-04-26 15:05:13.524675] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
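The GET LOG PAGE exchanges around this point follow the usual NVMe-oF discovery-log read pattern: a first read of the discovery log (log page ID 0x70, the low byte of cdw10), a follow-up read for the remaining records, and, just below, an 8-byte re-read of the generation counter to check that the log did not change while it was being paged in. With the nvme-tcp module loaded by the earlier modprobe, the same records could also be fetched through the kernel initiator with nvme-cli (a sketch; nvme-cli is not part of this test run):

  # Kernel-side equivalent of the discovery-log read traced above.
  nvme discover -t tcp -a 10.0.0.2 -s 4420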
00:24:27.835 [2024-04-26 15:05:13.524682] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4e40) on tqpair=0xf6c190 00:24:27.835 [2024-04-26 15:05:13.524696] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524704] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf6c190) 00:24:27.835 [2024-04-26 15:05:13.524715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.835 [2024-04-26 15:05:13.524743] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4e40, cid 4, qid 0 00:24:27.835 [2024-04-26 15:05:13.524897] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.835 [2024-04-26 15:05:13.524915] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.835 [2024-04-26 15:05:13.524923] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524937] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf6c190): datao=0, datal=8, cccid=4 00:24:27.835 [2024-04-26 15:05:13.524944] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfd4e40) on tqpair(0xf6c190): expected_datao=0, payload_size=8 00:24:27.835 [2024-04-26 15:05:13.524952] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524961] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.524968] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.565209] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.835 [2024-04-26 15:05:13.565243] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.835 [2024-04-26 15:05:13.565256] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.835 [2024-04-26 15:05:13.565268] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4e40) on tqpair=0xf6c190 00:24:27.835 ===================================================== 00:24:27.835 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:27.835 ===================================================== 00:24:27.835 Controller Capabilities/Features 00:24:27.835 ================================ 00:24:27.835 Vendor ID: 0000 00:24:27.835 Subsystem Vendor ID: 0000 00:24:27.835 Serial Number: .................... 00:24:27.835 Model Number: ........................................ 
00:24:27.835 Firmware Version: 24.05 00:24:27.835 Recommended Arb Burst: 0 00:24:27.835 IEEE OUI Identifier: 00 00 00 00:24:27.835 Multi-path I/O 00:24:27.835 May have multiple subsystem ports: No 00:24:27.835 May have multiple controllers: No 00:24:27.835 Associated with SR-IOV VF: No 00:24:27.835 Max Data Transfer Size: 131072 00:24:27.835 Max Number of Namespaces: 0 00:24:27.835 Max Number of I/O Queues: 1024 00:24:27.835 NVMe Specification Version (VS): 1.3 00:24:27.835 NVMe Specification Version (Identify): 1.3 00:24:27.835 Maximum Queue Entries: 128 00:24:27.835 Contiguous Queues Required: Yes 00:24:27.835 Arbitration Mechanisms Supported 00:24:27.835 Weighted Round Robin: Not Supported 00:24:27.835 Vendor Specific: Not Supported 00:24:27.835 Reset Timeout: 15000 ms 00:24:27.835 Doorbell Stride: 4 bytes 00:24:27.835 NVM Subsystem Reset: Not Supported 00:24:27.835 Command Sets Supported 00:24:27.835 NVM Command Set: Supported 00:24:27.835 Boot Partition: Not Supported 00:24:27.835 Memory Page Size Minimum: 4096 bytes 00:24:27.835 Memory Page Size Maximum: 4096 bytes 00:24:27.835 Persistent Memory Region: Not Supported 00:24:27.835 Optional Asynchronous Events Supported 00:24:27.835 Namespace Attribute Notices: Not Supported 00:24:27.835 Firmware Activation Notices: Not Supported 00:24:27.835 ANA Change Notices: Not Supported 00:24:27.835 PLE Aggregate Log Change Notices: Not Supported 00:24:27.835 LBA Status Info Alert Notices: Not Supported 00:24:27.835 EGE Aggregate Log Change Notices: Not Supported 00:24:27.835 Normal NVM Subsystem Shutdown event: Not Supported 00:24:27.835 Zone Descriptor Change Notices: Not Supported 00:24:27.835 Discovery Log Change Notices: Supported 00:24:27.835 Controller Attributes 00:24:27.835 128-bit Host Identifier: Not Supported 00:24:27.835 Non-Operational Permissive Mode: Not Supported 00:24:27.835 NVM Sets: Not Supported 00:24:27.835 Read Recovery Levels: Not Supported 00:24:27.835 Endurance Groups: Not Supported 00:24:27.835 Predictable Latency Mode: Not Supported 00:24:27.835 Traffic Based Keep ALive: Not Supported 00:24:27.835 Namespace Granularity: Not Supported 00:24:27.835 SQ Associations: Not Supported 00:24:27.835 UUID List: Not Supported 00:24:27.835 Multi-Domain Subsystem: Not Supported 00:24:27.835 Fixed Capacity Management: Not Supported 00:24:27.835 Variable Capacity Management: Not Supported 00:24:27.835 Delete Endurance Group: Not Supported 00:24:27.835 Delete NVM Set: Not Supported 00:24:27.835 Extended LBA Formats Supported: Not Supported 00:24:27.835 Flexible Data Placement Supported: Not Supported 00:24:27.835 00:24:27.835 Controller Memory Buffer Support 00:24:27.835 ================================ 00:24:27.835 Supported: No 00:24:27.835 00:24:27.835 Persistent Memory Region Support 00:24:27.835 ================================ 00:24:27.835 Supported: No 00:24:27.835 00:24:27.835 Admin Command Set Attributes 00:24:27.835 ============================ 00:24:27.835 Security Send/Receive: Not Supported 00:24:27.835 Format NVM: Not Supported 00:24:27.835 Firmware Activate/Download: Not Supported 00:24:27.835 Namespace Management: Not Supported 00:24:27.835 Device Self-Test: Not Supported 00:24:27.835 Directives: Not Supported 00:24:27.835 NVMe-MI: Not Supported 00:24:27.835 Virtualization Management: Not Supported 00:24:27.835 Doorbell Buffer Config: Not Supported 00:24:27.835 Get LBA Status Capability: Not Supported 00:24:27.835 Command & Feature Lockdown Capability: Not Supported 00:24:27.835 Abort Command Limit: 1 00:24:27.835 Async 
Event Request Limit: 4 00:24:27.835 Number of Firmware Slots: N/A 00:24:27.835 Firmware Slot 1 Read-Only: N/A 00:24:27.835 Firmware Activation Without Reset: N/A 00:24:27.835 Multiple Update Detection Support: N/A 00:24:27.835 Firmware Update Granularity: No Information Provided 00:24:27.835 Per-Namespace SMART Log: No 00:24:27.835 Asymmetric Namespace Access Log Page: Not Supported 00:24:27.835 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:27.835 Command Effects Log Page: Not Supported 00:24:27.835 Get Log Page Extended Data: Supported 00:24:27.835 Telemetry Log Pages: Not Supported 00:24:27.835 Persistent Event Log Pages: Not Supported 00:24:27.835 Supported Log Pages Log Page: May Support 00:24:27.835 Commands Supported & Effects Log Page: Not Supported 00:24:27.835 Feature Identifiers & Effects Log Page:May Support 00:24:27.835 NVMe-MI Commands & Effects Log Page: May Support 00:24:27.835 Data Area 4 for Telemetry Log: Not Supported 00:24:27.835 Error Log Page Entries Supported: 128 00:24:27.835 Keep Alive: Not Supported 00:24:27.835 00:24:27.835 NVM Command Set Attributes 00:24:27.835 ========================== 00:24:27.835 Submission Queue Entry Size 00:24:27.835 Max: 1 00:24:27.835 Min: 1 00:24:27.835 Completion Queue Entry Size 00:24:27.835 Max: 1 00:24:27.835 Min: 1 00:24:27.835 Number of Namespaces: 0 00:24:27.835 Compare Command: Not Supported 00:24:27.835 Write Uncorrectable Command: Not Supported 00:24:27.835 Dataset Management Command: Not Supported 00:24:27.836 Write Zeroes Command: Not Supported 00:24:27.836 Set Features Save Field: Not Supported 00:24:27.836 Reservations: Not Supported 00:24:27.836 Timestamp: Not Supported 00:24:27.836 Copy: Not Supported 00:24:27.836 Volatile Write Cache: Not Present 00:24:27.836 Atomic Write Unit (Normal): 1 00:24:27.836 Atomic Write Unit (PFail): 1 00:24:27.836 Atomic Compare & Write Unit: 1 00:24:27.836 Fused Compare & Write: Supported 00:24:27.836 Scatter-Gather List 00:24:27.836 SGL Command Set: Supported 00:24:27.836 SGL Keyed: Supported 00:24:27.836 SGL Bit Bucket Descriptor: Not Supported 00:24:27.836 SGL Metadata Pointer: Not Supported 00:24:27.836 Oversized SGL: Not Supported 00:24:27.836 SGL Metadata Address: Not Supported 00:24:27.836 SGL Offset: Supported 00:24:27.836 Transport SGL Data Block: Not Supported 00:24:27.836 Replay Protected Memory Block: Not Supported 00:24:27.836 00:24:27.836 Firmware Slot Information 00:24:27.836 ========================= 00:24:27.836 Active slot: 0 00:24:27.836 00:24:27.836 00:24:27.836 Error Log 00:24:27.836 ========= 00:24:27.836 00:24:27.836 Active Namespaces 00:24:27.836 ================= 00:24:27.836 Discovery Log Page 00:24:27.836 ================== 00:24:27.836 Generation Counter: 2 00:24:27.836 Number of Records: 2 00:24:27.836 Record Format: 0 00:24:27.836 00:24:27.836 Discovery Log Entry 0 00:24:27.836 ---------------------- 00:24:27.836 Transport Type: 3 (TCP) 00:24:27.836 Address Family: 1 (IPv4) 00:24:27.836 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:27.836 Entry Flags: 00:24:27.836 Duplicate Returned Information: 1 00:24:27.836 Explicit Persistent Connection Support for Discovery: 1 00:24:27.836 Transport Requirements: 00:24:27.836 Secure Channel: Not Required 00:24:27.836 Port ID: 0 (0x0000) 00:24:27.836 Controller ID: 65535 (0xffff) 00:24:27.836 Admin Max SQ Size: 128 00:24:27.836 Transport Service Identifier: 4420 00:24:27.836 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:27.836 Transport Address: 10.0.0.2 00:24:27.836 
Discovery Log Entry 1 00:24:27.836 ---------------------- 00:24:27.836 Transport Type: 3 (TCP) 00:24:27.836 Address Family: 1 (IPv4) 00:24:27.836 Subsystem Type: 2 (NVM Subsystem) 00:24:27.836 Entry Flags: 00:24:27.836 Duplicate Returned Information: 0 00:24:27.836 Explicit Persistent Connection Support for Discovery: 0 00:24:27.836 Transport Requirements: 00:24:27.836 Secure Channel: Not Required 00:24:27.836 Port ID: 0 (0x0000) 00:24:27.836 Controller ID: 65535 (0xffff) 00:24:27.836 Admin Max SQ Size: 128 00:24:27.836 Transport Service Identifier: 4420 00:24:27.836 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:27.836 Transport Address: 10.0.0.2 [2024-04-26 15:05:13.565451] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:27.836 [2024-04-26 15:05:13.565496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.836 [2024-04-26 15:05:13.565517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.836 [2024-04-26 15:05:13.565534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.836 [2024-04-26 15:05:13.565549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.836 [2024-04-26 15:05:13.565570] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.565582] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.565589] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf6c190) 00:24:27.836 [2024-04-26 15:05:13.565601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.836 [2024-04-26 15:05:13.565627] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4ce0, cid 3, qid 0 00:24:27.836 [2024-04-26 15:05:13.565810] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.836 [2024-04-26 15:05:13.565825] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.836 [2024-04-26 15:05:13.565832] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.565839] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4ce0) on tqpair=0xf6c190 00:24:27.836 [2024-04-26 15:05:13.565851] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.565859] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.565865] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf6c190) 00:24:27.836 [2024-04-26 15:05:13.565876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.836 [2024-04-26 15:05:13.565904] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4ce0, cid 3, qid 0 00:24:27.836 [2024-04-26 15:05:13.566065] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.836 [2024-04-26 15:05:13.566081] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.836 [2024-04-26 15:05:13.566089] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566096] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4ce0) on tqpair=0xf6c190 00:24:27.836 [2024-04-26 15:05:13.566104] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:27.836 [2024-04-26 15:05:13.566118] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:27.836 [2024-04-26 15:05:13.566137] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566146] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566153] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf6c190) 00:24:27.836 [2024-04-26 15:05:13.566164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.836 [2024-04-26 15:05:13.566186] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4ce0, cid 3, qid 0 00:24:27.836 [2024-04-26 15:05:13.566337] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.836 [2024-04-26 15:05:13.566352] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.836 [2024-04-26 15:05:13.566360] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566366] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4ce0) on tqpair=0xf6c190 00:24:27.836 [2024-04-26 15:05:13.566384] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566394] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566401] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf6c190) 00:24:27.836 [2024-04-26 15:05:13.566411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.836 [2024-04-26 15:05:13.566433] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4ce0, cid 3, qid 0 00:24:27.836 [2024-04-26 15:05:13.566528] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.836 [2024-04-26 15:05:13.566543] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.836 [2024-04-26 15:05:13.566550] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566557] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4ce0) on tqpair=0xf6c190 00:24:27.836 [2024-04-26 15:05:13.566590] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566600] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566606] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf6c190) 00:24:27.836 [2024-04-26 15:05:13.566616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.836 [2024-04-26 15:05:13.566637] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4ce0, cid 3, qid 0 00:24:27.836 [2024-04-26 15:05:13.566745] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.836 [2024-04-26 
15:05:13.566760] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.836 [2024-04-26 15:05:13.566767] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566773] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4ce0) on tqpair=0xf6c190 00:24:27.836 [2024-04-26 15:05:13.566790] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566800] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566806] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf6c190) 00:24:27.836 [2024-04-26 15:05:13.566816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.836 [2024-04-26 15:05:13.566837] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4ce0, cid 3, qid 0 00:24:27.836 [2024-04-26 15:05:13.566942] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.836 [2024-04-26 15:05:13.566954] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.836 [2024-04-26 15:05:13.566961] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566968] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4ce0) on tqpair=0xf6c190 00:24:27.836 [2024-04-26 15:05:13.566988] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.566998] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.836 [2024-04-26 15:05:13.567005] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf6c190) 00:24:28.100 [2024-04-26 15:05:13.567015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-04-26 15:05:13.571055] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd4ce0, cid 3, qid 0 00:24:28.100 [2024-04-26 15:05:13.571246] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.100 [2024-04-26 15:05:13.571260] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.100 [2024-04-26 15:05:13.571267] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.571274] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfd4ce0) on tqpair=0xf6c190 00:24:28.100 [2024-04-26 15:05:13.571288] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:28.100 00:24:28.100 15:05:13 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:28.100 [2024-04-26 15:05:13.605438] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
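This second identify pass targets the NVM subsystem itself rather than the discovery controller; the invocation is identical except for the subnqn in the -r transport ID string. To repeat it by hand from the same build tree (paths as in the log; -L all enables every debug log flag, which is what produces the DEBUG trace below):

  # Re-run the identify against cnode1 directly, with full debug logging.
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all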
00:24:28.100 [2024-04-26 15:05:13.605482] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851230 ] 00:24:28.100 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.100 [2024-04-26 15:05:13.623573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:28.100 [2024-04-26 15:05:13.641139] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:28.100 [2024-04-26 15:05:13.641193] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:28.100 [2024-04-26 15:05:13.641203] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:28.100 [2024-04-26 15:05:13.641219] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:28.100 [2024-04-26 15:05:13.641233] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:28.100 [2024-04-26 15:05:13.641538] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:28.100 [2024-04-26 15:05:13.641576] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e84190 0 00:24:28.100 [2024-04-26 15:05:13.648037] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:28.100 [2024-04-26 15:05:13.648055] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:28.100 [2024-04-26 15:05:13.648064] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:28.100 [2024-04-26 15:05:13.648070] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:28.100 [2024-04-26 15:05:13.648112] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.648123] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.648130] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.100 [2024-04-26 15:05:13.648154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:28.100 [2024-04-26 15:05:13.648180] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.100 [2024-04-26 15:05:13.659031] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.100 [2024-04-26 15:05:13.659050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.100 [2024-04-26 15:05:13.659058] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.659065] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e84190 00:24:28.100 [2024-04-26 15:05:13.659087] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:28.100 [2024-04-26 15:05:13.659098] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:28.100 [2024-04-26 15:05:13.659108] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:28.100 [2024-04-26 15:05:13.659126] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.659135] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.659142] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.100 [2024-04-26 15:05:13.659154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-04-26 15:05:13.659178] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.100 [2024-04-26 15:05:13.659392] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.100 [2024-04-26 15:05:13.659407] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.100 [2024-04-26 15:05:13.659413] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.659420] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e84190 00:24:28.100 [2024-04-26 15:05:13.659429] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:28.100 [2024-04-26 15:05:13.659442] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:28.100 [2024-04-26 15:05:13.659455] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.659462] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.659468] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.100 [2024-04-26 15:05:13.659479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-04-26 15:05:13.659500] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.100 [2024-04-26 15:05:13.659695] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.100 [2024-04-26 15:05:13.659709] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.100 [2024-04-26 15:05:13.659716] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.659722] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e84190 00:24:28.100 [2024-04-26 15:05:13.659731] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:28.100 [2024-04-26 15:05:13.659745] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:28.100 [2024-04-26 15:05:13.659757] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.659764] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.659770] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.100 [2024-04-26 15:05:13.659780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-04-26 15:05:13.659801] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.100 [2024-04-26 
15:05:13.659983] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.100 [2024-04-26 15:05:13.659997] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.100 [2024-04-26 15:05:13.660027] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.660035] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e84190 00:24:28.100 [2024-04-26 15:05:13.660045] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:28.100 [2024-04-26 15:05:13.660062] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.660071] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.100 [2024-04-26 15:05:13.660077] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.100 [2024-04-26 15:05:13.660088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.100 [2024-04-26 15:05:13.660109] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.100 [2024-04-26 15:05:13.660318] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.100 [2024-04-26 15:05:13.660333] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.100 [2024-04-26 15:05:13.660339] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.660346] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e84190 00:24:28.101 [2024-04-26 15:05:13.660355] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:28.101 [2024-04-26 15:05:13.660363] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:28.101 [2024-04-26 15:05:13.660376] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:28.101 [2024-04-26 15:05:13.660485] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:28.101 [2024-04-26 15:05:13.660492] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:28.101 [2024-04-26 15:05:13.660504] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.660511] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.660517] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.101 [2024-04-26 15:05:13.660528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-04-26 15:05:13.660548] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.101 [2024-04-26 15:05:13.660743] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.101 [2024-04-26 15:05:13.660757] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.101 [2024-04-26 15:05:13.660764] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.660770] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e84190 00:24:28.101 [2024-04-26 15:05:13.660779] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:28.101 [2024-04-26 15:05:13.660796] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.660804] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.660811] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.101 [2024-04-26 15:05:13.660821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-04-26 15:05:13.660848] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.101 [2024-04-26 15:05:13.660993] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.101 [2024-04-26 15:05:13.661007] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.101 [2024-04-26 15:05:13.661014] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661044] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e84190 00:24:28.101 [2024-04-26 15:05:13.661054] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:28.101 [2024-04-26 15:05:13.661062] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:28.101 [2024-04-26 15:05:13.661076] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:28.101 [2024-04-26 15:05:13.661090] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:28.101 [2024-04-26 15:05:13.661105] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661114] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.101 [2024-04-26 15:05:13.661125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.101 [2024-04-26 15:05:13.661146] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.101 [2024-04-26 15:05:13.661372] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.101 [2024-04-26 15:05:13.661387] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.101 [2024-04-26 15:05:13.661393] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661399] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e84190): datao=0, datal=4096, cccid=0 00:24:28.101 [2024-04-26 15:05:13.661407] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eec8c0) on tqpair(0x1e84190): expected_datao=0, payload_size=4096 00:24:28.101 [2024-04-26 15:05:13.661414] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661431] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661440] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661539] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.101 [2024-04-26 15:05:13.661553] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.101 [2024-04-26 15:05:13.661559] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661566] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e84190 00:24:28.101 [2024-04-26 15:05:13.661577] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:28.101 [2024-04-26 15:05:13.661585] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:28.101 [2024-04-26 15:05:13.661592] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:28.101 [2024-04-26 15:05:13.661598] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:28.101 [2024-04-26 15:05:13.661606] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:28.101 [2024-04-26 15:05:13.661613] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:28.101 [2024-04-26 15:05:13.661627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:28.101 [2024-04-26 15:05:13.661639] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661646] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661655] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.101 [2024-04-26 15:05:13.661666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.101 [2024-04-26 15:05:13.661687] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.101 [2024-04-26 15:05:13.661837] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.101 [2024-04-26 15:05:13.661852] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.101 [2024-04-26 15:05:13.661858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661865] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eec8c0) on tqpair=0x1e84190 00:24:28.101 [2024-04-26 15:05:13.661876] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661883] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661889] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e84190) 00:24:28.101 [2024-04-26 15:05:13.661898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.101 [2024-04-26 15:05:13.661908] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661915] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661921] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e84190) 00:24:28.101 [2024-04-26 15:05:13.661929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.101 [2024-04-26 15:05:13.661938] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661945] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661951] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e84190) 00:24:28.101 [2024-04-26 15:05:13.661959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.101 [2024-04-26 15:05:13.661968] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661975] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.101 [2024-04-26 15:05:13.661981] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e84190) 00:24:28.102 [2024-04-26 15:05:13.661989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.102 [2024-04-26 15:05:13.661997] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.662015] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.662049] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.662057] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e84190) 00:24:28.102 [2024-04-26 15:05:13.662068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-04-26 15:05:13.662091] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eec8c0, cid 0, qid 0 00:24:28.102 [2024-04-26 15:05:13.662102] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eeca20, cid 1, qid 0 00:24:28.102 [2024-04-26 15:05:13.662109] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecb80, cid 2, qid 0 00:24:28.102 [2024-04-26 15:05:13.662117] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecce0, cid 3, qid 0 00:24:28.102 [2024-04-26 15:05:13.662124] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eece40, cid 4, qid 0 00:24:28.102 [2024-04-26 15:05:13.662370] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.102 [2024-04-26 15:05:13.662385] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.102 [2024-04-26 15:05:13.662391] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.662398] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eece40) on tqpair=0x1e84190 00:24:28.102 [2024-04-26 15:05:13.662407] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:28.102 [2024-04-26 15:05:13.662415] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.662433] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.662445] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.662455] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.662462] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.662468] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e84190) 00:24:28.102 [2024-04-26 15:05:13.662478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.102 [2024-04-26 15:05:13.662499] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eece40, cid 4, qid 0 00:24:28.102 [2024-04-26 15:05:13.662691] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.102 [2024-04-26 15:05:13.662705] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.102 [2024-04-26 15:05:13.662712] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.662718] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eece40) on tqpair=0x1e84190 00:24:28.102 [2024-04-26 15:05:13.662769] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.662788] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.662802] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.662809] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e84190) 00:24:28.102 [2024-04-26 15:05:13.662819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-04-26 15:05:13.662840] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eece40, cid 4, qid 0 00:24:28.102 [2024-04-26 15:05:13.667033] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.102 [2024-04-26 15:05:13.667050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.102 [2024-04-26 15:05:13.667058] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.667064] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e84190): datao=0, datal=4096, cccid=4 00:24:28.102 [2024-04-26 15:05:13.667072] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eece40) on tqpair(0x1e84190): expected_datao=0, payload_size=4096 00:24:28.102 [2024-04-26 15:05:13.667080] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.102 [2024-04-26 
15:05:13.667090] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.667098] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.707029] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.102 [2024-04-26 15:05:13.707047] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.102 [2024-04-26 15:05:13.707058] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.707065] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eece40) on tqpair=0x1e84190 00:24:28.102 [2024-04-26 15:05:13.707081] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:28.102 [2024-04-26 15:05:13.707103] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.707122] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.707135] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.707143] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e84190) 00:24:28.102 [2024-04-26 15:05:13.707154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-04-26 15:05:13.707177] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eece40, cid 4, qid 0 00:24:28.102 [2024-04-26 15:05:13.707401] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.102 [2024-04-26 15:05:13.707416] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.102 [2024-04-26 15:05:13.707423] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.707429] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e84190): datao=0, datal=4096, cccid=4 00:24:28.102 [2024-04-26 15:05:13.707436] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eece40) on tqpair(0x1e84190): expected_datao=0, payload_size=4096 00:24:28.102 [2024-04-26 15:05:13.707444] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.707474] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.707484] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.749198] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.102 [2024-04-26 15:05:13.749216] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.102 [2024-04-26 15:05:13.749224] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.749231] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eece40) on tqpair=0x1e84190 00:24:28.102 [2024-04-26 15:05:13.749253] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:28.102 [2024-04-26 15:05:13.749273] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 
00:24:28.102 [2024-04-26 15:05:13.749287] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.749295] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e84190) 00:24:28.102 [2024-04-26 15:05:13.749322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.102 [2024-04-26 15:05:13.749345] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eece40, cid 4, qid 0 00:24:28.102 [2024-04-26 15:05:13.749485] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.102 [2024-04-26 15:05:13.749500] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.102 [2024-04-26 15:05:13.749507] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.102 [2024-04-26 15:05:13.749514] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e84190): datao=0, datal=4096, cccid=4 00:24:28.103 [2024-04-26 15:05:13.749521] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eece40) on tqpair(0x1e84190): expected_datao=0, payload_size=4096 00:24:28.103 [2024-04-26 15:05:13.749528] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.749545] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.749558] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.791171] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.103 [2024-04-26 15:05:13.791189] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.103 [2024-04-26 15:05:13.791197] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.791204] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eece40) on tqpair=0x1e84190 00:24:28.103 [2024-04-26 15:05:13.791219] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:28.103 [2024-04-26 15:05:13.791235] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:28.103 [2024-04-26 15:05:13.791250] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:28.103 [2024-04-26 15:05:13.791261] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:28.103 [2024-04-26 15:05:13.791270] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:28.103 [2024-04-26 15:05:13.791279] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:28.103 [2024-04-26 15:05:13.791287] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:28.103 [2024-04-26 15:05:13.791295] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:28.103 [2024-04-26 15:05:13.791329] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.103 [2024-04-26 
15:05:13.791338] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e84190) 00:24:28.103 [2024-04-26 15:05:13.791350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-04-26 15:05:13.791361] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.791368] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.791389] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e84190) 00:24:28.103 [2024-04-26 15:05:13.791399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.103 [2024-04-26 15:05:13.791432] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eece40, cid 4, qid 0 00:24:28.103 [2024-04-26 15:05:13.791444] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecfa0, cid 5, qid 0 00:24:28.103 [2024-04-26 15:05:13.791612] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.103 [2024-04-26 15:05:13.791626] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.103 [2024-04-26 15:05:13.791633] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.791640] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eece40) on tqpair=0x1e84190 00:24:28.103 [2024-04-26 15:05:13.791651] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.103 [2024-04-26 15:05:13.791660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.103 [2024-04-26 15:05:13.791666] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.791673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eecfa0) on tqpair=0x1e84190 00:24:28.103 [2024-04-26 15:05:13.791689] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.791698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e84190) 00:24:28.103 [2024-04-26 15:05:13.791708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-04-26 15:05:13.791739] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecfa0, cid 5, qid 0 00:24:28.103 [2024-04-26 15:05:13.791872] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.103 [2024-04-26 15:05:13.791887] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.103 [2024-04-26 15:05:13.791893] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.791900] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eecfa0) on tqpair=0x1e84190 00:24:28.103 [2024-04-26 15:05:13.791917] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.791925] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e84190) 00:24:28.103 [2024-04-26 15:05:13.791935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-04-26 15:05:13.791955] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecfa0, cid 5, qid 0 00:24:28.103 [2024-04-26 15:05:13.792090] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.103 [2024-04-26 15:05:13.792104] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.103 [2024-04-26 15:05:13.792111] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792118] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eecfa0) on tqpair=0x1e84190 00:24:28.103 [2024-04-26 15:05:13.792135] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792144] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e84190) 00:24:28.103 [2024-04-26 15:05:13.792154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-04-26 15:05:13.792175] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecfa0, cid 5, qid 0 00:24:28.103 [2024-04-26 15:05:13.792311] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.103 [2024-04-26 15:05:13.792325] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.103 [2024-04-26 15:05:13.792332] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792339] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eecfa0) on tqpair=0x1e84190 00:24:28.103 [2024-04-26 15:05:13.792373] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792384] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e84190) 00:24:28.103 [2024-04-26 15:05:13.792394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-04-26 15:05:13.792405] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792413] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e84190) 00:24:28.103 [2024-04-26 15:05:13.792422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-04-26 15:05:13.792432] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792439] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e84190) 00:24:28.103 [2024-04-26 15:05:13.792448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-04-26 15:05:13.792459] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792466] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e84190) 00:24:28.103 [2024-04-26 15:05:13.792475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.103 [2024-04-26 15:05:13.792500] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecfa0, cid 5, qid 0 
00:24:28.103 [2024-04-26 15:05:13.792511] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eece40, cid 4, qid 0 00:24:28.103 [2024-04-26 15:05:13.792519] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eed100, cid 6, qid 0 00:24:28.103 [2024-04-26 15:05:13.792526] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eed260, cid 7, qid 0 00:24:28.103 [2024-04-26 15:05:13.792775] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.103 [2024-04-26 15:05:13.792787] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.103 [2024-04-26 15:05:13.792794] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792800] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e84190): datao=0, datal=8192, cccid=5 00:24:28.103 [2024-04-26 15:05:13.792807] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eecfa0) on tqpair(0x1e84190): expected_datao=0, payload_size=8192 00:24:28.103 [2024-04-26 15:05:13.792814] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792853] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792863] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792872] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.103 [2024-04-26 15:05:13.792880] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.103 [2024-04-26 15:05:13.792887] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792893] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e84190): datao=0, datal=512, cccid=4 00:24:28.103 [2024-04-26 15:05:13.792900] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eece40) on tqpair(0x1e84190): expected_datao=0, payload_size=512 00:24:28.103 [2024-04-26 15:05:13.792907] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.103 [2024-04-26 15:05:13.792916] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.792923] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.792931] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.104 [2024-04-26 15:05:13.792939] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.104 [2024-04-26 15:05:13.792946] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.792951] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e84190): datao=0, datal=512, cccid=6 00:24:28.104 [2024-04-26 15:05:13.792959] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eed100) on tqpair(0x1e84190): expected_datao=0, payload_size=512 00:24:28.104 [2024-04-26 15:05:13.792965] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.792974] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.792981] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.792989] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.104 [2024-04-26 15:05:13.793012] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.104 [2024-04-26 15:05:13.793026] 
nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.793033] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e84190): datao=0, datal=4096, cccid=7 00:24:28.104 [2024-04-26 15:05:13.793041] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eed260) on tqpair(0x1e84190): expected_datao=0, payload_size=4096 00:24:28.104 [2024-04-26 15:05:13.793048] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.793058] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.793065] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.793077] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.104 [2024-04-26 15:05:13.793086] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.104 [2024-04-26 15:05:13.793096] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.793103] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eecfa0) on tqpair=0x1e84190 00:24:28.104 [2024-04-26 15:05:13.793123] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.104 [2024-04-26 15:05:13.793134] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.104 [2024-04-26 15:05:13.793141] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.793147] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eece40) on tqpair=0x1e84190 00:24:28.104 [2024-04-26 15:05:13.793163] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.104 [2024-04-26 15:05:13.793173] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.104 [2024-04-26 15:05:13.793180] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.793186] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eed100) on tqpair=0x1e84190 00:24:28.104 [2024-04-26 15:05:13.793198] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.104 [2024-04-26 15:05:13.793207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.104 [2024-04-26 15:05:13.793213] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.104 [2024-04-26 15:05:13.793220] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eed260) on tqpair=0x1e84190 00:24:28.104 ===================================================== 00:24:28.104 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.104 ===================================================== 00:24:28.104 Controller Capabilities/Features 00:24:28.104 ================================ 00:24:28.104 Vendor ID: 8086 00:24:28.104 Subsystem Vendor ID: 8086 00:24:28.104 Serial Number: SPDK00000000000001 00:24:28.104 Model Number: SPDK bdev Controller 00:24:28.104 Firmware Version: 24.05 00:24:28.104 Recommended Arb Burst: 6 00:24:28.104 IEEE OUI Identifier: e4 d2 5c 00:24:28.104 Multi-path I/O 00:24:28.104 May have multiple subsystem ports: Yes 00:24:28.104 May have multiple controllers: Yes 00:24:28.104 Associated with SR-IOV VF: No 00:24:28.104 Max Data Transfer Size: 131072 00:24:28.104 Max Number of Namespaces: 32 00:24:28.104 Max Number of I/O Queues: 127 00:24:28.104 NVMe Specification Version (VS): 1.3 00:24:28.104 NVMe Specification Version 
(Identify): 1.3 00:24:28.104 Maximum Queue Entries: 128 00:24:28.104 Contiguous Queues Required: Yes 00:24:28.104 Arbitration Mechanisms Supported 00:24:28.104 Weighted Round Robin: Not Supported 00:24:28.104 Vendor Specific: Not Supported 00:24:28.104 Reset Timeout: 15000 ms 00:24:28.104 Doorbell Stride: 4 bytes 00:24:28.104 NVM Subsystem Reset: Not Supported 00:24:28.104 Command Sets Supported 00:24:28.104 NVM Command Set: Supported 00:24:28.104 Boot Partition: Not Supported 00:24:28.104 Memory Page Size Minimum: 4096 bytes 00:24:28.104 Memory Page Size Maximum: 4096 bytes 00:24:28.104 Persistent Memory Region: Not Supported 00:24:28.104 Optional Asynchronous Events Supported 00:24:28.104 Namespace Attribute Notices: Supported 00:24:28.104 Firmware Activation Notices: Not Supported 00:24:28.104 ANA Change Notices: Not Supported 00:24:28.104 PLE Aggregate Log Change Notices: Not Supported 00:24:28.104 LBA Status Info Alert Notices: Not Supported 00:24:28.104 EGE Aggregate Log Change Notices: Not Supported 00:24:28.104 Normal NVM Subsystem Shutdown event: Not Supported 00:24:28.104 Zone Descriptor Change Notices: Not Supported 00:24:28.104 Discovery Log Change Notices: Not Supported 00:24:28.104 Controller Attributes 00:24:28.104 128-bit Host Identifier: Supported 00:24:28.104 Non-Operational Permissive Mode: Not Supported 00:24:28.104 NVM Sets: Not Supported 00:24:28.104 Read Recovery Levels: Not Supported 00:24:28.104 Endurance Groups: Not Supported 00:24:28.104 Predictable Latency Mode: Not Supported 00:24:28.104 Traffic Based Keep ALive: Not Supported 00:24:28.104 Namespace Granularity: Not Supported 00:24:28.104 SQ Associations: Not Supported 00:24:28.104 UUID List: Not Supported 00:24:28.104 Multi-Domain Subsystem: Not Supported 00:24:28.104 Fixed Capacity Management: Not Supported 00:24:28.104 Variable Capacity Management: Not Supported 00:24:28.104 Delete Endurance Group: Not Supported 00:24:28.104 Delete NVM Set: Not Supported 00:24:28.104 Extended LBA Formats Supported: Not Supported 00:24:28.104 Flexible Data Placement Supported: Not Supported 00:24:28.104 00:24:28.104 Controller Memory Buffer Support 00:24:28.104 ================================ 00:24:28.104 Supported: No 00:24:28.104 00:24:28.104 Persistent Memory Region Support 00:24:28.104 ================================ 00:24:28.104 Supported: No 00:24:28.104 00:24:28.104 Admin Command Set Attributes 00:24:28.104 ============================ 00:24:28.104 Security Send/Receive: Not Supported 00:24:28.104 Format NVM: Not Supported 00:24:28.104 Firmware Activate/Download: Not Supported 00:24:28.104 Namespace Management: Not Supported 00:24:28.104 Device Self-Test: Not Supported 00:24:28.104 Directives: Not Supported 00:24:28.104 NVMe-MI: Not Supported 00:24:28.104 Virtualization Management: Not Supported 00:24:28.104 Doorbell Buffer Config: Not Supported 00:24:28.104 Get LBA Status Capability: Not Supported 00:24:28.104 Command & Feature Lockdown Capability: Not Supported 00:24:28.104 Abort Command Limit: 4 00:24:28.104 Async Event Request Limit: 4 00:24:28.104 Number of Firmware Slots: N/A 00:24:28.104 Firmware Slot 1 Read-Only: N/A 00:24:28.104 Firmware Activation Without Reset: N/A 00:24:28.104 Multiple Update Detection Support: N/A 00:24:28.104 Firmware Update Granularity: No Information Provided 00:24:28.104 Per-Namespace SMART Log: No 00:24:28.104 Asymmetric Namespace Access Log Page: Not Supported 00:24:28.104 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:28.104 Command Effects Log Page: Supported 00:24:28.104 Get 
Log Page Extended Data: Supported 00:24:28.105 Telemetry Log Pages: Not Supported 00:24:28.105 Persistent Event Log Pages: Not Supported 00:24:28.105 Supported Log Pages Log Page: May Support 00:24:28.105 Commands Supported & Effects Log Page: Not Supported 00:24:28.105 Feature Identifiers & Effects Log Page:May Support 00:24:28.105 NVMe-MI Commands & Effects Log Page: May Support 00:24:28.105 Data Area 4 for Telemetry Log: Not Supported 00:24:28.105 Error Log Page Entries Supported: 128 00:24:28.105 Keep Alive: Supported 00:24:28.105 Keep Alive Granularity: 10000 ms 00:24:28.105 00:24:28.105 NVM Command Set Attributes 00:24:28.105 ========================== 00:24:28.105 Submission Queue Entry Size 00:24:28.105 Max: 64 00:24:28.105 Min: 64 00:24:28.105 Completion Queue Entry Size 00:24:28.105 Max: 16 00:24:28.105 Min: 16 00:24:28.105 Number of Namespaces: 32 00:24:28.105 Compare Command: Supported 00:24:28.105 Write Uncorrectable Command: Not Supported 00:24:28.105 Dataset Management Command: Supported 00:24:28.105 Write Zeroes Command: Supported 00:24:28.105 Set Features Save Field: Not Supported 00:24:28.105 Reservations: Supported 00:24:28.105 Timestamp: Not Supported 00:24:28.105 Copy: Supported 00:24:28.105 Volatile Write Cache: Present 00:24:28.105 Atomic Write Unit (Normal): 1 00:24:28.105 Atomic Write Unit (PFail): 1 00:24:28.105 Atomic Compare & Write Unit: 1 00:24:28.105 Fused Compare & Write: Supported 00:24:28.105 Scatter-Gather List 00:24:28.105 SGL Command Set: Supported 00:24:28.105 SGL Keyed: Supported 00:24:28.105 SGL Bit Bucket Descriptor: Not Supported 00:24:28.105 SGL Metadata Pointer: Not Supported 00:24:28.105 Oversized SGL: Not Supported 00:24:28.105 SGL Metadata Address: Not Supported 00:24:28.105 SGL Offset: Supported 00:24:28.105 Transport SGL Data Block: Not Supported 00:24:28.105 Replay Protected Memory Block: Not Supported 00:24:28.105 00:24:28.105 Firmware Slot Information 00:24:28.105 ========================= 00:24:28.105 Active slot: 1 00:24:28.105 Slot 1 Firmware Revision: 24.05 00:24:28.105 00:24:28.105 00:24:28.105 Commands Supported and Effects 00:24:28.105 ============================== 00:24:28.105 Admin Commands 00:24:28.105 -------------- 00:24:28.105 Get Log Page (02h): Supported 00:24:28.105 Identify (06h): Supported 00:24:28.105 Abort (08h): Supported 00:24:28.105 Set Features (09h): Supported 00:24:28.105 Get Features (0Ah): Supported 00:24:28.105 Asynchronous Event Request (0Ch): Supported 00:24:28.105 Keep Alive (18h): Supported 00:24:28.105 I/O Commands 00:24:28.105 ------------ 00:24:28.105 Flush (00h): Supported LBA-Change 00:24:28.105 Write (01h): Supported LBA-Change 00:24:28.105 Read (02h): Supported 00:24:28.105 Compare (05h): Supported 00:24:28.105 Write Zeroes (08h): Supported LBA-Change 00:24:28.105 Dataset Management (09h): Supported LBA-Change 00:24:28.105 Copy (19h): Supported LBA-Change 00:24:28.105 Unknown (79h): Supported LBA-Change 00:24:28.105 Unknown (7Ah): Supported 00:24:28.105 00:24:28.105 Error Log 00:24:28.105 ========= 00:24:28.105 00:24:28.105 Arbitration 00:24:28.105 =========== 00:24:28.105 Arbitration Burst: 1 00:24:28.105 00:24:28.105 Power Management 00:24:28.105 ================ 00:24:28.105 Number of Power States: 1 00:24:28.105 Current Power State: Power State #0 00:24:28.105 Power State #0: 00:24:28.105 Max Power: 0.00 W 00:24:28.105 Non-Operational State: Operational 00:24:28.105 Entry Latency: Not Reported 00:24:28.105 Exit Latency: Not Reported 00:24:28.105 Relative Read Throughput: 0 00:24:28.105 Relative 
Read Latency: 0 00:24:28.105 Relative Write Throughput: 0 00:24:28.105 Relative Write Latency: 0 00:24:28.105 Idle Power: Not Reported 00:24:28.105 Active Power: Not Reported 00:24:28.105 Non-Operational Permissive Mode: Not Supported 00:24:28.105 00:24:28.105 Health Information 00:24:28.105 ================== 00:24:28.105 Critical Warnings: 00:24:28.105 Available Spare Space: OK 00:24:28.105 Temperature: OK 00:24:28.105 Device Reliability: OK 00:24:28.105 Read Only: No 00:24:28.105 Volatile Memory Backup: OK 00:24:28.105 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:28.105 Temperature Threshold: [2024-04-26 15:05:13.793358] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.105 [2024-04-26 15:05:13.793369] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e84190) 00:24:28.105 [2024-04-26 15:05:13.793380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-04-26 15:05:13.793402] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eed260, cid 7, qid 0 00:24:28.107 [2024-04-26 15:05:13.793586] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.107 [2024-04-26 15:05:13.793598] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.107 [2024-04-26 15:05:13.793605] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.793612] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eed260) on tqpair=0x1e84190 00:24:28.107 [2024-04-26 15:05:13.793652] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:28.107 [2024-04-26 15:05:13.793673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.107 [2024-04-26 15:05:13.793684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.107 [2024-04-26 15:05:13.793693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.107 [2024-04-26 15:05:13.793702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.107 [2024-04-26 15:05:13.793714] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.793722] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.793728] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e84190) 00:24:28.107 [2024-04-26 15:05:13.793738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-04-26 15:05:13.793758] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecce0, cid 3, qid 0 00:24:28.107 [2024-04-26 15:05:13.793929] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.107 [2024-04-26 15:05:13.793943] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.107 [2024-04-26 15:05:13.793950] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.793960] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1eecce0) on tqpair=0x1e84190 00:24:28.107 [2024-04-26 15:05:13.793972] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.793980] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.793986] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e84190) 00:24:28.107 [2024-04-26 15:05:13.794010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-04-26 15:05:13.798048] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecce0, cid 3, qid 0 00:24:28.107 [2024-04-26 15:05:13.798246] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.107 [2024-04-26 15:05:13.798262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.107 [2024-04-26 15:05:13.798269] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.798276] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eecce0) on tqpair=0x1e84190 00:24:28.107 [2024-04-26 15:05:13.798285] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:28.107 [2024-04-26 15:05:13.798293] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:28.107 [2024-04-26 15:05:13.798326] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.798336] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.798342] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e84190) 00:24:28.107 [2024-04-26 15:05:13.798352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.107 [2024-04-26 15:05:13.798373] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecce0, cid 3, qid 0 00:24:28.107 [2024-04-26 15:05:13.798552] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.107 [2024-04-26 15:05:13.798566] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.107 [2024-04-26 15:05:13.798573] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.107 [2024-04-26 15:05:13.798579] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eecce0) on tqpair=0x1e84190
00:24:28.107 [identical FABRIC PROPERTY GET poll cycles repeated with the same debug output from 15:05:13.798597 through 15:05:13.801994 while the host polled for shutdown completion; duplicate cycles omitted]
00:24:28.109 [2024-04-26 15:05:13.806032] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.109 [2024-04-26 15:05:13.806047] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.109 [2024-04-26 15:05:13.806054] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e84190) 00:24:28.109 [2024-04-26 15:05:13.806064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.109 [2024-04-26 15:05:13.806087] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eecce0, cid 3, qid 0 00:24:28.109 [2024-04-26 15:05:13.806250] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:28.109 [2024-04-26 15:05:13.806265] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.109 [2024-04-26 15:05:13.806272] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.109 [2024-04-26 15:05:13.806279] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1eecce0) on tqpair=0x1e84190 00:24:28.109 [2024-04-26 15:05:13.806308] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:24:28.109 0 Kelvin (-273 Celsius) 00:24:28.109 Available Spare: 0% 00:24:28.109 Available Spare Threshold: 0% 00:24:28.109 Life Percentage Used: 0% 00:24:28.109 Data Units Read: 0 00:24:28.109 Data Units Written: 0 00:24:28.109 Host Read Commands: 0 00:24:28.109 Host Write Commands: 0 00:24:28.109 Controller Busy Time: 0 minutes 00:24:28.109 Power Cycles: 0 00:24:28.109 Power On Hours: 0 hours 00:24:28.109 Unsafe Shutdowns: 0 00:24:28.109 Unrecoverable Media Errors: 0 00:24:28.109 Lifetime Error Log Entries: 0 00:24:28.109 Warning Temperature Time: 0 minutes 00:24:28.109 Critical Temperature Time: 0 minutes 00:24:28.109 00:24:28.109 Number of Queues 00:24:28.109 ================ 00:24:28.109 Number of I/O Submission Queues: 127 00:24:28.109 Number of I/O Completion Queues: 127 00:24:28.109 00:24:28.109 Active Namespaces 00:24:28.109 ================= 00:24:28.109 Namespace ID:1 00:24:28.109 Error Recovery Timeout: Unlimited 00:24:28.109 Command Set Identifier: NVM (00h) 00:24:28.109 Deallocate: Supported 00:24:28.109 Deallocated/Unwritten Error: Not Supported 00:24:28.109 Deallocated Read Value: Unknown 00:24:28.109 Deallocate in Write Zeroes: Not Supported 00:24:28.109 Deallocated Guard Field: 0xFFFF 00:24:28.109 Flush: Supported 00:24:28.109 Reservation: Supported 00:24:28.109 Namespace Sharing Capabilities: Multiple Controllers 00:24:28.109 Size (in LBAs): 131072 (0GiB) 00:24:28.109 Capacity (in LBAs): 131072 (0GiB) 00:24:28.109 Utilization (in LBAs): 131072 (0GiB) 00:24:28.109 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:28.109 EUI64: ABCDEF0123456789 00:24:28.109 UUID: 2cf2facf-d863-4648-894f-bc7d32d92863 00:24:28.109 Thin Provisioning: Not Supported 00:24:28.109 Per-NS Atomic Units: Yes 00:24:28.109 Atomic Boundary Size (Normal): 0 00:24:28.109 Atomic Boundary Size (PFail): 0 00:24:28.109 Atomic Boundary Offset: 0 00:24:28.109 Maximum Single Source Range Length: 65535 00:24:28.109 Maximum Copy Length: 65535 00:24:28.109 Maximum Source Range Count: 1 00:24:28.109 NGUID/EUI64 Never Reused: No 00:24:28.109 Namespace Write Protected: No 00:24:28.109 Number of LBA Formats: 1 00:24:28.109 Current LBA Format: LBA Format #00 00:24:28.109 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:28.109 00:24:28.109 15:05:13 -- host/identify.sh@51 -- # sync 00:24:28.109 15:05:13 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.109 15:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.109 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:24:28.109 15:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.109 15:05:13 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:28.109 15:05:13 -- host/identify.sh@56 -- # nvmftestfini 00:24:28.109 15:05:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:28.109 15:05:13 -- nvmf/common.sh@117 -- # sync 00:24:28.109 15:05:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.109 15:05:13 -- nvmf/common.sh@120 -- # set +e 00:24:28.109 15:05:13 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.109 15:05:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.367 rmmod nvme_tcp 00:24:28.367 rmmod nvme_fabrics 00:24:28.367 rmmod nvme_keyring 00:24:28.367 15:05:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.367 15:05:13 -- nvmf/common.sh@124 -- # set -e 00:24:28.367 15:05:13 -- nvmf/common.sh@125 -- # return 0 00:24:28.367 15:05:13 -- nvmf/common.sh@478 -- # '[' -n 3851082 ']' 00:24:28.367 15:05:13 -- nvmf/common.sh@479 -- # killprocess 3851082 00:24:28.367 15:05:13 -- common/autotest_common.sh@936 -- # '[' -z 3851082 ']' 00:24:28.367 15:05:13 -- common/autotest_common.sh@940 -- # kill -0 3851082 00:24:28.367 15:05:13 -- common/autotest_common.sh@941 -- # uname 00:24:28.367 15:05:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:28.367 15:05:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3851082 00:24:28.367 15:05:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:28.367 15:05:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:28.367 15:05:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3851082' 00:24:28.367 killing process with pid 3851082 00:24:28.367 15:05:13 -- common/autotest_common.sh@955 -- # kill 3851082 00:24:28.367 [2024-04-26 15:05:13.905533] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:28.367 15:05:13 -- common/autotest_common.sh@960 -- # wait 3851082 00:24:28.626 15:05:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:28.626 15:05:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:28.626 15:05:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:28.626 15:05:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.626 15:05:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.626 15:05:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.626 15:05:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.627 15:05:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.527 15:05:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.527 00:24:30.527 real 0m5.341s 00:24:30.527 user 0m4.575s 00:24:30.527 sys 0m1.801s 00:24:30.527 15:05:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:30.527 15:05:16 -- common/autotest_common.sh@10 -- # set +x 00:24:30.527 ************************************ 00:24:30.527 END TEST nvmf_identify 00:24:30.527 ************************************ 00:24:30.527 15:05:16 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:30.527 15:05:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:30.527 15:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:30.527 15:05:16 -- common/autotest_common.sh@10 -- # set +x 00:24:30.786 ************************************ 00:24:30.786 START TEST nvmf_perf 00:24:30.786 ************************************ 00:24:30.786 15:05:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:30.786 * Looking for test storage... 
00:24:30.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.786 15:05:16 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.786 15:05:16 -- nvmf/common.sh@7 -- # uname -s 00:24:30.786 15:05:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.786 15:05:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.786 15:05:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.786 15:05:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.786 15:05:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.786 15:05:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.786 15:05:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.786 15:05:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.786 15:05:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.786 15:05:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.786 15:05:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:30.786 15:05:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:30.786 15:05:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.786 15:05:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.786 15:05:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.786 15:05:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.786 15:05:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.786 15:05:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.786 15:05:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.786 15:05:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.786 15:05:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:30.786 [paths/export.sh@3 and @4 prepend the same toolchain directories again; @5 exports PATH and @6 echoes the identical value; duplicate PATH dumps omitted]
00:24:30.786 15:05:16 -- nvmf/common.sh@47 -- # : 0 00:24:30.786 15:05:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.786 15:05:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.786 15:05:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.786 15:05:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.786 15:05:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.786 15:05:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.786 15:05:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.786 15:05:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.786 15:05:16 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:30.786 15:05:16 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:30.786 15:05:16 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:30.786 15:05:16 -- host/perf.sh@17 -- # nvmftestinit 00:24:30.786 15:05:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:30.786 15:05:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.786 15:05:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:30.786 15:05:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:30.786 15:05:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:30.786 15:05:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.786 15:05:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.786 15:05:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.786 15:05:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:30.786 15:05:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:30.786 15:05:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.786 15:05:16 -- common/autotest_common.sh@10 -- # set +x 00:24:32.687 15:05:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:32.687 15:05:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.687 15:05:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.687 15:05:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.687 15:05:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.687 15:05:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.687 15:05:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.687 15:05:18 -- nvmf/common.sh@295 -- # net_devs=()
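gather_supported_nvmf_pci_devs, traced next, buckets candidate NICs by PCI vendor:device ID (Intel E810 0x1592/0x159b, x722 0x37d2, and a list of Mellanox IDs) and then reads the matching netdev names out of sysfs. A rough standalone sketch of that matching, assuming the standard sysfs layout rather than the harness's internal pci_bus_cache:

    # Sketch only: enumerate E810/x722 NICs the same way the trace below buckets them.
    for dev in /sys/bus/pci/devices/*; do
      id="$(cat "$dev/vendor"):$(cat "$dev/device")"    # e.g. 0x8086:0x159b
      case "$id" in
        0x8086:0x1592|0x8086:0x159b|0x8086:0x37d2)
          echo "Found ${dev##*/} ($id)"
          ls "$dev/net" 2>/dev/null                     # kernel netdev name(s), e.g. cvl_0_0
          ;;
      esac
    done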
00:24:32.687 15:05:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.687 15:05:18 -- nvmf/common.sh@296 -- # e810=() 00:24:32.687 15:05:18 -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.687 15:05:18 -- nvmf/common.sh@297 -- # x722=() 00:24:32.687 15:05:18 -- nvmf/common.sh@297 -- # local -ga x722 00:24:32.687 15:05:18 -- nvmf/common.sh@298 -- # mlx=() 00:24:32.687 15:05:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.687 15:05:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.687 15:05:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.687 15:05:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.687 15:05:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.687 15:05:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.687 15:05:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:32.687 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:32.687 15:05:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.687 15:05:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:32.687 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:32.687 15:05:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.687 15:05:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.687 15:05:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.687 15:05:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.687 15:05:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:32.687 15:05:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:24:32.687 15:05:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:32.687 Found net devices under 0000:84:00.0: cvl_0_0 00:24:32.687 15:05:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.687 15:05:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.688 15:05:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.688 15:05:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:32.688 15:05:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.688 15:05:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:32.688 Found net devices under 0000:84:00.1: cvl_0_1 00:24:32.688 15:05:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.688 15:05:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:32.688 15:05:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:32.688 15:05:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:32.688 15:05:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:32.688 15:05:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:32.688 15:05:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.688 15:05:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.688 15:05:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.688 15:05:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.688 15:05:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.688 15:05:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.688 15:05:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.688 15:05:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.688 15:05:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.688 15:05:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.688 15:05:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.688 15:05:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.946 15:05:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.946 15:05:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.946 15:05:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.946 15:05:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.946 15:05:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.946 15:05:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.946 15:05:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.946 15:05:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:24:32.946 00:24:32.946 --- 10.0.0.2 ping statistics --- 00:24:32.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.946 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:24:32.946 15:05:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:24:32.946 00:24:32.946 --- 10.0.0.1 ping statistics --- 00:24:32.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.946 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:32.946 15:05:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.946 15:05:18 -- nvmf/common.sh@411 -- # return 0 00:24:32.946 15:05:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:32.946 15:05:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.946 15:05:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:32.946 15:05:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:32.946 15:05:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.946 15:05:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:32.946 15:05:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:32.946 15:05:18 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:32.946 15:05:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:32.946 15:05:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:32.946 15:05:18 -- common/autotest_common.sh@10 -- # set +x 00:24:32.946 15:05:18 -- nvmf/common.sh@470 -- # nvmfpid=3853181 00:24:32.946 15:05:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:32.946 15:05:18 -- nvmf/common.sh@471 -- # waitforlisten 3853181 00:24:32.946 15:05:18 -- common/autotest_common.sh@817 -- # '[' -z 3853181 ']' 00:24:32.946 15:05:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.946 15:05:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:32.946 15:05:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.946 15:05:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:32.946 15:05:18 -- common/autotest_common.sh@10 -- # set +x 00:24:32.946 [2024-04-26 15:05:18.604853] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:24:32.946 [2024-04-26 15:05:18.604937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.946 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.946 [2024-04-26 15:05:18.643464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:32.946 [2024-04-26 15:05:18.670382] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.205 [2024-04-26 15:05:18.760566] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.205 [2024-04-26 15:05:18.760619] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.205 [2024-04-26 15:05:18.760649] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.205 [2024-04-26 15:05:18.760661] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:33.205 [2024-04-26 15:05:18.760671] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.205 [2024-04-26 15:05:18.760725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.205 [2024-04-26 15:05:18.760779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.205 [2024-04-26 15:05:18.760781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.205 [2024-04-26 15:05:18.760753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.205 15:05:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:33.205 15:05:18 -- common/autotest_common.sh@850 -- # return 0 00:24:33.205 15:05:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:33.205 15:05:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:33.205 15:05:18 -- common/autotest_common.sh@10 -- # set +x 00:24:33.205 15:05:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.205 15:05:18 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:33.205 15:05:18 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:36.515 15:05:21 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:36.515 15:05:21 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:36.774 15:05:22 -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:24:36.774 15:05:22 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:37.032 15:05:22 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:37.032 15:05:22 -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:24:37.032 15:05:22 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:37.032 15:05:22 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:37.032 15:05:22 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:37.032 [2024-04-26 15:05:22.742128] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.289 15:05:22 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.547 15:05:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:37.547 15:05:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.804 15:05:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:37.804 15:05:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:38.062 15:05:23 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.319 [2024-04-26 15:05:23.882259] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.319 15:05:23 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
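The trace above is the complete target bring-up for this perf test: a RAM-disk bdev (Malloc0) and the local NVMe bdev (Nvme0n1) exported through one subsystem over a TCP transport, with data and discovery listeners on 10.0.0.2:4420. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt and using the exact rpc.py subcommands traced above:

    #!/usr/bin/env bash
    # Sketch only: re-creates the bring-up this harness just performed.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc bdev_malloc_create 64 512                            # 64 MiB RAM disk, 512 B blocks -> Malloc0
    $rpc nvmf_create_transport -t tcp -o                      # TCP transport, default options
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001  # -a: allow any host
    $rpc nvmf_subsystem_add_ns $nqn Malloc0                   # becomes NSID 1
    $rpc nvmf_subsystem_add_ns $nqn Nvme0n1                   # becomes NSID 2
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The runs below first benchmark the SSD directly over PCIe (-r 'trtype:PCIe traddr:0000:82:00.0') and then over the fabric (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'). In each result table the MiB/s column is simply IOPS x io_size / 2^20; for the PCIe baseline, 86489.67 x 4096 / 1048576 = 337.85 MiB/s.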
00:24:38.576 15:05:24 -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:24:38.576 15:05:24 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:24:38.576 15:05:24 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:38.576 15:05:24 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:24:39.947 Initializing NVMe Controllers 00:24:39.947 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:24:39.947 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:24:39.947 Initialization complete. Launching workers. 00:24:39.947 ======================================================== 00:24:39.947 Latency(us) 00:24:39.947 Device Information : IOPS MiB/s Average min max 00:24:39.947 PCIE (0000:82:00.0) NSID 1 from core 0: 86489.67 337.85 369.62 28.29 4671.11 00:24:39.947 ======================================================== 00:24:39.947 Total : 86489.67 337.85 369.62 28.29 4671.11 00:24:39.947 00:24:39.947 15:05:25 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:39.947 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.319 Initializing NVMe Controllers 00:24:41.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:41.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:41.319 Initialization complete. Launching workers. 00:24:41.319 ======================================================== 00:24:41.319 Latency(us) 00:24:41.319 Device Information : IOPS MiB/s Average min max 00:24:41.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 103.00 0.40 10102.48 155.32 45027.92 00:24:41.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 58.00 0.23 17611.10 7948.88 47901.25 00:24:41.319 ======================================================== 00:24:41.319 Total : 161.00 0.63 12807.45 155.32 47901.25 00:24:41.319 00:24:41.319 15:05:26 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:41.319 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.693 Initializing NVMe Controllers 00:24:42.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:42.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:42.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:42.693 Initialization complete. Launching workers.
00:24:42.693 ======================================================== 00:24:42.693 Latency(us) 00:24:42.693 Device Information : IOPS MiB/s Average min max 00:24:42.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8322.54 32.51 3846.08 570.77 9860.57 00:24:42.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3841.09 15.00 8381.27 6470.50 16206.42 00:24:42.693 ======================================================== 00:24:42.693 Total : 12163.63 47.51 5278.23 570.77 16206.42 00:24:42.693 00:24:42.693 15:05:28 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:42.693 15:05:28 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:42.693 15:05:28 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:42.693 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.220 Initializing NVMe Controllers 00:24:45.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.220 Controller IO queue size 128, less than required. 00:24:45.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.220 Controller IO queue size 128, less than required. 00:24:45.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:45.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:45.220 Initialization complete. Launching workers. 00:24:45.220 ======================================================== 00:24:45.220 Latency(us) 00:24:45.220 Device Information : IOPS MiB/s Average min max 00:24:45.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1338.78 334.69 96830.69 52560.33 152517.59 00:24:45.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 621.47 155.37 215842.42 108960.46 314118.72 00:24:45.220 ======================================================== 00:24:45.220 Total : 1960.25 490.06 134561.68 52560.33 314118.72 00:24:45.220 00:24:45.220 15:05:30 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:45.220 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.220 No valid NVMe controllers or AIO or URING devices found 00:24:45.220 Initializing NVMe Controllers 00:24:45.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.220 Controller IO queue size 128, less than required. 00:24:45.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.220 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:45.220 Controller IO queue size 128, less than required. 00:24:45.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.220 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:45.220 WARNING: Some requested NVMe devices were skipped
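Both namespaces are skipped here for the same reason: spdk_nvme_perf requires the -o IO size to be a whole multiple of the namespace sector size, and 36964 bytes is not divisible by 512. A one-line check:

    echo $(( 36964 % 512 ))   # -> 100, so a 36964 B IO cannot be built from whole 512 B sectors

With every namespace removed there is nothing left to exercise, which is presumably why the 'No valid NVMe controllers or AIO or URING devices found' line appears in this run (output buffering prints it before the per-namespace warnings).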
00:24:45.220 15:05:30 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:45.220 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.760 Initializing NVMe Controllers 00:24:47.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.760 Controller IO queue size 128, less than required. 00:24:47.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.760 Controller IO queue size 128, less than required. 00:24:47.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:47.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:47.760 Initialization complete. Launching workers. 00:24:47.760 00:24:47.760 ==================== 00:24:47.760 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:47.760 TCP transport: 00:24:47.760 polls: 15166 00:24:47.760 idle_polls: 9896 00:24:47.760 sock_completions: 5270 00:24:47.760 nvme_completions: 4885 00:24:47.760 submitted_requests: 7372 00:24:47.760 queued_requests: 1 00:24:47.760 00:24:47.760 ==================== 00:24:47.760 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:47.760 TCP transport: 00:24:47.760 polls: 12447 00:24:47.760 idle_polls: 6460 00:24:47.760 sock_completions: 5987 00:24:47.760 nvme_completions: 5047 00:24:47.760 submitted_requests: 7566 00:24:47.760 queued_requests: 1 00:24:47.760 ======================================================== 00:24:47.760 Latency(us) 00:24:47.760 Device Information : IOPS MiB/s Average min max 00:24:47.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1220.80 305.20 108218.61 62298.60 198812.04 00:24:47.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1261.29 315.32 102770.70 42510.83 156681.34 00:24:47.760 ======================================================== 00:24:47.760 Total : 2482.09 620.52 105450.22 42510.83 198812.04 00:24:47.760 00:24:47.760 15:05:33 -- host/perf.sh@66 -- # sync 00:24:48.039 15:05:33 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.039 15:05:33 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:24:48.039 15:05:33 -- host/perf.sh@71 -- # '[' -n 0000:82:00.0 ']' 00:24:48.039 15:05:33 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:24:52.216 15:05:37 -- host/perf.sh@72 -- # ls_guid=99243817-5b24-476c-b210-e7582f87b2c2 00:24:52.216 15:05:37 -- host/perf.sh@73 -- # get_lvs_free_mb 99243817-5b24-476c-b210-e7582f87b2c2 00:24:52.216 15:05:37 -- common/autotest_common.sh@1350 -- # local lvs_uuid=99243817-5b24-476c-b210-e7582f87b2c2 00:24:52.216 15:05:37 -- common/autotest_common.sh@1351 -- # local lvs_info 00:24:52.216 15:05:37 -- common/autotest_common.sh@1352 -- # local fc 00:24:52.216 15:05:37 -- common/autotest_common.sh@1353 -- # local cs 00:24:52.216 15:05:37 -- common/autotest_common.sh@1354 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:52.216 15:05:37 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:24:52.216 { 00:24:52.216 "uuid": "99243817-5b24-476c-b210-e7582f87b2c2", 00:24:52.216 "name": "lvs_0", 00:24:52.216 "base_bdev": "Nvme0n1", 00:24:52.216 "total_data_clusters": 238234, 00:24:52.216 "free_clusters": 238234, 00:24:52.216 "block_size": 512, 00:24:52.216 "cluster_size": 4194304 00:24:52.216 } 00:24:52.216 ]' 00:24:52.216 15:05:37 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="99243817-5b24-476c-b210-e7582f87b2c2") .free_clusters' 00:24:52.216 15:05:37 -- common/autotest_common.sh@1355 -- # fc=238234 00:24:52.216 15:05:37 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="99243817-5b24-476c-b210-e7582f87b2c2") .cluster_size' 00:24:52.216 15:05:37 -- common/autotest_common.sh@1356 -- # cs=4194304 00:24:52.216 15:05:37 -- common/autotest_common.sh@1359 -- # free_mb=952936 00:24:52.216 15:05:37 -- common/autotest_common.sh@1360 -- # echo 952936 00:24:52.216 952936 00:24:52.216 15:05:37 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:24:52.216 15:05:37 -- host/perf.sh@78 -- # free_mb=20480 00:24:52.216 15:05:37 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99243817-5b24-476c-b210-e7582f87b2c2 lbd_0 20480 00:24:52.216 15:05:37 -- host/perf.sh@80 -- # lb_guid=db008602-9198-49d2-a7fa-468f55b9e364 00:24:52.216 15:05:37 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore db008602-9198-49d2-a7fa-468f55b9e364 lvs_n_0 00:24:53.148 15:05:38 -- host/perf.sh@83 -- # ls_nested_guid=9674ae7f-716c-43c4-a791-e2df5f3034b3 00:24:53.148 15:05:38 -- host/perf.sh@84 -- # get_lvs_free_mb 9674ae7f-716c-43c4-a791-e2df5f3034b3 00:24:53.148 15:05:38 -- common/autotest_common.sh@1350 -- # local lvs_uuid=9674ae7f-716c-43c4-a791-e2df5f3034b3 00:24:53.148 15:05:38 -- common/autotest_common.sh@1351 -- # local lvs_info 00:24:53.148 15:05:38 -- common/autotest_common.sh@1352 -- # local fc 00:24:53.148 15:05:38 -- common/autotest_common.sh@1353 -- # local cs 00:24:53.148 15:05:38 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:53.404 15:05:38 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:24:53.404 { 00:24:53.404 "uuid": "99243817-5b24-476c-b210-e7582f87b2c2", 00:24:53.404 "name": "lvs_0", 00:24:53.404 "base_bdev": "Nvme0n1", 00:24:53.404 "total_data_clusters": 238234, 00:24:53.404 "free_clusters": 233114, 00:24:53.404 "block_size": 512, 00:24:53.404 "cluster_size": 4194304 00:24:53.404 }, 00:24:53.404 { 00:24:53.404 "uuid": "9674ae7f-716c-43c4-a791-e2df5f3034b3", 00:24:53.404 "name": "lvs_n_0", 00:24:53.404 "base_bdev": "db008602-9198-49d2-a7fa-468f55b9e364", 00:24:53.404 "total_data_clusters": 5114, 00:24:53.404 "free_clusters": 5114, 00:24:53.404 "block_size": 512, 00:24:53.404 "cluster_size": 4194304 00:24:53.404 } 00:24:53.404 ]' 00:24:53.404 15:05:38 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="9674ae7f-716c-43c4-a791-e2df5f3034b3") .free_clusters' 00:24:53.404 15:05:38 -- common/autotest_common.sh@1355 -- # fc=5114 00:24:53.404 15:05:38 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="9674ae7f-716c-43c4-a791-e2df5f3034b3") .cluster_size' 00:24:53.404 15:05:38 -- common/autotest_common.sh@1356 -- # cs=4194304 00:24:53.404 15:05:38 -- common/autotest_common.sh@1359 -- # free_mb=20456 00:24:53.404 15:05:38 -- common/autotest_common.sh@1360 -- # echo 20456 00:24:53.404 20456
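get_lvs_free_mb above is plain cluster arithmetic: free_clusters times cluster_size, converted to MiB. A quick check with the values reported in this run:

    # free MiB = free_clusters * cluster_size / 1 MiB (4 MiB clusters here)
    echo $(( 238234 * 4194304 / 1048576 ))   # lvs_0   -> 952936
    echo $((   5114 * 4194304 / 1048576 ))   # lvs_n_0 -> 20456

lvs_0's 952936 MiB gets clamped to the 20480 MiB requested for lbd_0, while the nested store only offers 20456 MiB (5114 clusters), so lbd_nest_0 below is created with 20456 instead.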
free_mb=20456 00:24:53.404 15:05:38 -- common/autotest_common.sh@1360 -- # echo 20456 00:24:53.404 20456 00:24:53.404 15:05:38 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:24:53.404 15:05:38 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9674ae7f-716c-43c4-a791-e2df5f3034b3 lbd_nest_0 20456 00:24:53.661 15:05:39 -- host/perf.sh@88 -- # lb_nested_guid=6d21bda3-0a46-4751-bc80-3c8da33c7a33 00:24:53.661 15:05:39 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.918 15:05:39 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:24:53.918 15:05:39 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6d21bda3-0a46-4751-bc80-3c8da33c7a33 00:24:54.175 15:05:39 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.462 15:05:39 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:24:54.462 15:05:39 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:24:54.462 15:05:39 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:54.462 15:05:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:54.462 15:05:39 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.462 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.646 Initializing NVMe Controllers 00:25:06.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:06.646 Initialization complete. Launching workers. 00:25:06.646 ======================================================== 00:25:06.646 Latency(us) 00:25:06.646 Device Information : IOPS MiB/s Average min max 00:25:06.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.70 0.02 21965.41 182.18 48349.24 00:25:06.646 ======================================================== 00:25:06.646 Total : 45.70 0.02 21965.41 182.18 48349.24 00:25:06.646 00:25:06.646 15:05:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:06.646 15:05:50 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.646 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.642 Initializing NVMe Controllers 00:25:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:16.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:16.642 Initialization complete. Launching workers. 
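For reference, the export performed just above by host/perf.sh@89-93 reduces to three rpc.py calls; a condensed sketch, with the subsystem NQN, namespace UUID, and listener address copied from the trace and the rpc.py path shortened for readability:

    # Condensed from the rpc.py invocations traced above (host/perf.sh@89-93).
    # The namespace UUID is lbd_nest_0's, returned by bdev_lvol_create at @88.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6d21bda3-0a46-4751-bc80-3c8da33c7a33
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420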
00:25:16.642 ======================================================== 00:25:16.642 Latency(us) 00:25:16.642 Device Information : IOPS MiB/s Average min max 00:25:16.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.80 10.10 12383.72 4973.78 50877.32 00:25:16.642 ======================================================== 00:25:16.642 Total : 80.80 10.10 12383.72 4973.78 50877.32 00:25:16.642 00:25:16.642 15:06:00 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:16.642 15:06:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:16.642 15:06:00 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:16.642 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.639 Initializing NVMe Controllers 00:25:26.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:26.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:26.639 Initialization complete. Launching workers. 00:25:26.639 ======================================================== 00:25:26.639 Latency(us) 00:25:26.639 Device Information : IOPS MiB/s Average min max 00:25:26.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7614.06 3.72 4203.43 293.67 11056.40 00:25:26.639 ======================================================== 00:25:26.639 Total : 7614.06 3.72 4203.43 293.67 11056.40 00:25:26.639 00:25:26.639 15:06:11 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:26.639 15:06:11 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:26.639 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.616 Initializing NVMe Controllers 00:25:36.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:36.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:36.616 Initialization complete. Launching workers. 00:25:36.616 ======================================================== 00:25:36.616 Latency(us) 00:25:36.616 Device Information : IOPS MiB/s Average min max 00:25:36.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2872.20 359.03 11147.82 895.17 31502.22 00:25:36.616 ======================================================== 00:25:36.616 Total : 2872.20 359.03 11147.82 895.17 31502.22 00:25:36.616 00:25:36.616 15:06:21 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:36.616 15:06:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:36.616 15:06:21 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:36.616 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.651 Initializing NVMe Controllers 00:25:46.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:46.651 Controller IO queue size 128, less than required. 00:25:46.651 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:46.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:46.651 Initialization complete. Launching workers. 
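The spdk_nvme_perf invocations above and below follow the qd_depth x io_size matrix declared at host/perf.sh@95-96; a minimal sketch of the driving loop, assuming the same target string as in the trace and shortening the spdk_nvme_perf path:

    # Sketch of the nested sweep at host/perf.sh@95-99; six runs in total.
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done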
00:25:46.651 ======================================================== 00:25:46.651 Latency(us) 00:25:46.651 Device Information : IOPS MiB/s Average min max 00:25:46.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11934.50 5.83 10726.70 1581.64 30139.44 00:25:46.651 ======================================================== 00:25:46.651 Total : 11934.50 5.83 10726.70 1581.64 30139.44 00:25:46.651 00:25:46.651 15:06:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:46.651 15:06:31 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:46.651 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.630 Initializing NVMe Controllers 00:25:56.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.630 Controller IO queue size 128, less than required. 00:25:56.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:56.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:56.630 Initialization complete. Launching workers. 00:25:56.630 ======================================================== 00:25:56.630 Latency(us) 00:25:56.630 Device Information : IOPS MiB/s Average min max 00:25:56.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1218.74 152.34 105421.81 26960.63 190660.26 00:25:56.630 ======================================================== 00:25:56.630 Total : 1218.74 152.34 105421.81 26960.63 190660.26 00:25:56.630 00:25:56.630 15:06:42 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.630 15:06:42 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6d21bda3-0a46-4751-bc80-3c8da33c7a33 00:25:57.568 15:06:43 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:57.826 15:06:43 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete db008602-9198-49d2-a7fa-468f55b9e364 00:25:58.084 15:06:43 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:58.342 15:06:43 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:58.342 15:06:43 -- host/perf.sh@114 -- # nvmftestfini 00:25:58.342 15:06:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:58.342 15:06:43 -- nvmf/common.sh@117 -- # sync 00:25:58.342 15:06:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.342 15:06:43 -- nvmf/common.sh@120 -- # set +e 00:25:58.342 15:06:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.342 15:06:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.342 rmmod nvme_tcp 00:25:58.342 rmmod nvme_fabrics 00:25:58.342 rmmod nvme_keyring 00:25:58.342 15:06:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.342 15:06:43 -- nvmf/common.sh@124 -- # set -e 00:25:58.342 15:06:43 -- nvmf/common.sh@125 -- # return 0 00:25:58.342 15:06:43 -- nvmf/common.sh@478 -- # '[' -n 3853181 ']' 00:25:58.342 15:06:43 -- nvmf/common.sh@479 -- # killprocess 3853181 00:25:58.342 15:06:43 -- common/autotest_common.sh@936 -- # '[' -z 3853181 ']' 00:25:58.342 15:06:43 -- common/autotest_common.sh@940 -- # 
kill -0 3853181 00:25:58.342 15:06:43 -- common/autotest_common.sh@941 -- # uname 00:25:58.342 15:06:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:58.342 15:06:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3853181 00:25:58.342 15:06:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:58.342 15:06:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:58.342 15:06:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3853181' 00:25:58.342 killing process with pid 3853181 00:25:58.342 15:06:43 -- common/autotest_common.sh@955 -- # kill 3853181 00:25:58.342 15:06:43 -- common/autotest_common.sh@960 -- # wait 3853181 00:26:00.246 15:06:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:00.246 15:06:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:00.246 15:06:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:00.246 15:06:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:00.246 15:06:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:00.246 15:06:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.246 15:06:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.246 15:06:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.156 15:06:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:02.156 00:26:02.156 real 1m31.230s 00:26:02.156 user 5m35.659s 00:26:02.156 sys 0m18.204s 00:26:02.156 15:06:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:02.156 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:02.156 ************************************ 00:26:02.156 END TEST nvmf_perf 00:26:02.156 ************************************ 00:26:02.156 15:06:47 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:02.156 15:06:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:02.156 15:06:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:02.156 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:02.156 ************************************ 00:26:02.156 START TEST nvmf_fio_host 00:26:02.156 ************************************ 00:26:02.156 15:06:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:02.156 * Looking for test storage... 
00:26:02.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.156 15:06:47 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.156 15:06:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.156 15:06:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.156 15:06:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.156 15:06:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.156 15:06:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.156 15:06:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.156 15:06:47 -- paths/export.sh@5 -- # export PATH 00:26:02.156 15:06:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.156 15:06:47 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.156 15:06:47 -- nvmf/common.sh@7 -- # uname -s 00:26:02.156 15:06:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.156 15:06:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.156 15:06:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.156 15:06:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.157 15:06:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.157 15:06:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.157 15:06:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.157 15:06:47 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.157 15:06:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.157 15:06:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.157 15:06:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:02.157 15:06:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:02.157 15:06:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.157 15:06:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.157 15:06:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.157 15:06:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.157 15:06:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.157 15:06:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.157 15:06:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.157 15:06:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.157 15:06:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.157 15:06:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.157 15:06:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.157 15:06:47 -- paths/export.sh@5 -- # export PATH 00:26:02.157 15:06:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.157 15:06:47 -- nvmf/common.sh@47 -- # : 0 00:26:02.157 15:06:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:02.157 15:06:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:02.157 15:06:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.157 15:06:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.157 15:06:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.157 15:06:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:02.157 15:06:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:02.157 15:06:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:02.157 15:06:47 -- host/fio.sh@12 -- # nvmftestinit 00:26:02.157 15:06:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:02.157 15:06:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.157 15:06:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:02.157 15:06:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:02.157 15:06:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:02.157 15:06:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.157 15:06:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.157 15:06:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.157 15:06:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:02.157 15:06:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:02.157 15:06:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:02.157 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:04.061 15:06:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:04.062 15:06:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.062 15:06:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:04.062 15:06:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.062 15:06:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.062 15:06:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.062 15:06:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:04.062 15:06:49 -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.062 15:06:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.062 15:06:49 -- nvmf/common.sh@296 -- # e810=() 00:26:04.062 15:06:49 -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.062 15:06:49 -- nvmf/common.sh@297 -- # x722=() 00:26:04.062 15:06:49 -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.062 15:06:49 -- nvmf/common.sh@298 -- # mlx=() 00:26:04.062 15:06:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.062 15:06:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.062 15:06:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.062 15:06:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:04.062 15:06:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.062 15:06:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.062 15:06:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:04.062 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:04.062 15:06:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.062 15:06:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:04.062 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:04.062 15:06:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.062 15:06:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.062 15:06:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.062 15:06:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:04.062 15:06:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.062 15:06:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:04.062 Found net devices under 0000:84:00.0: cvl_0_0 00:26:04.062 15:06:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.062 15:06:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.062 15:06:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.062 15:06:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:04.062 15:06:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.062 15:06:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:04.062 Found net devices under 0000:84:00.1: cvl_0_1 00:26:04.062 15:06:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.062 15:06:49 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:04.062 15:06:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:04.062 15:06:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:04.062 15:06:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:04.062 15:06:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.062 15:06:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.062 15:06:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.062 15:06:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:04.062 15:06:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.062 15:06:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.062 15:06:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:04.062 15:06:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.062 15:06:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.062 15:06:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:04.062 15:06:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:04.062 15:06:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.062 15:06:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.062 15:06:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.062 15:06:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.062 15:06:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.062 15:06:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.323 15:06:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.323 15:06:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.323 15:06:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:26:04.323 00:26:04.323 --- 10.0.0.2 ping statistics --- 00:26:04.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.323 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:26:04.323 15:06:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:04.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:26:04.323 00:26:04.323 --- 10.0.0.1 ping statistics --- 00:26:04.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.323 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:26:04.323 15:06:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.323 15:06:49 -- nvmf/common.sh@411 -- # return 0 00:26:04.323 15:06:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:04.323 15:06:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.323 15:06:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:04.323 15:06:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:04.323 15:06:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.323 15:06:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:04.323 15:06:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:04.323 15:06:49 -- host/fio.sh@14 -- # [[ y != y ]] 00:26:04.323 15:06:49 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:26:04.323 15:06:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:04.323 15:06:49 -- common/autotest_common.sh@10 -- # set +x 00:26:04.323 15:06:49 -- host/fio.sh@22 -- # nvmfpid=3865916 00:26:04.323 15:06:49 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:04.323 15:06:49 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:04.323 15:06:49 -- host/fio.sh@26 -- # waitforlisten 3865916 00:26:04.323 15:06:49 -- common/autotest_common.sh@817 -- # '[' -z 3865916 ']' 00:26:04.323 15:06:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.323 15:06:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:04.323 15:06:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.323 15:06:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:04.323 15:06:49 -- common/autotest_common.sh@10 -- # set +x 00:26:04.323 [2024-04-26 15:06:49.916278] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:26:04.323 [2024-04-26 15:06:49.916399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.323 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.323 [2024-04-26 15:06:49.955717] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:04.323 [2024-04-26 15:06:49.982411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.582 [2024-04-26 15:06:50.080212] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.582 [2024-04-26 15:06:50.080297] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.582 [2024-04-26 15:06:50.080312] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.582 [2024-04-26 15:06:50.080323] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
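The 10.0.0.1 to 10.0.0.2 path exercised by the pings above is built by nvmf_tcp_init; condensed from the nvmf/common.sh@244-264 trace, with interface names and addresses exactly as logged:

    # Condensed from the nvmf_tcp_init trace above: the target port cvl_0_0 moves
    # into a network namespace, the initiator port cvl_0_1 stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT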
00:26:04.582 [2024-04-26 15:06:50.080333] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:04.582 [2024-04-26 15:06:50.080401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.582 [2024-04-26 15:06:50.080463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.582 [2024-04-26 15:06:50.080488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.582 [2024-04-26 15:06:50.080490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.582 15:06:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:04.582 15:06:50 -- common/autotest_common.sh@850 -- # return 0 00:26:04.582 15:06:50 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:04.582 15:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.582 15:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 [2024-04-26 15:06:50.213919] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.582 15:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.582 15:06:50 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:26:04.582 15:06:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:04.582 15:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 15:06:50 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:04.582 15:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.582 15:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 Malloc1 00:26:04.582 15:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.582 15:06:50 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.582 15:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.582 15:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 15:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.582 15:06:50 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:04.582 15:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.582 15:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 15:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.582 15:06:50 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.582 15:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.582 15:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 [2024-04-26 15:06:50.295594] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.582 15:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.582 15:06:50 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:04.582 15:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.582 15:06:50 -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 15:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.582 15:06:50 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:04.582 15:06:50 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:04.582 15:06:50 -- common/autotest_common.sh@1346 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:04.582 15:06:50 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:04.582 15:06:50 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:04.582 15:06:50 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:04.582 15:06:50 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:04.582 15:06:50 -- common/autotest_common.sh@1327 -- # shift 00:26:04.582 15:06:50 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:04.582 15:06:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.582 15:06:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:04.582 15:06:50 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:04.582 15:06:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:04.842 15:06:50 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:04.842 15:06:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:04.842 15:06:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.842 15:06:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:04.842 15:06:50 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:04.842 15:06:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:04.842 15:06:50 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:04.842 15:06:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:04.842 15:06:50 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:04.842 15:06:50 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:04.842 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:04.842 fio-3.35 00:26:04.842 Starting 1 thread 00:26:04.842 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.377 00:26:07.377 test: (groupid=0, jobs=1): err= 0: pid=3866133: Fri Apr 26 15:06:52 2024 00:26:07.377 read: IOPS=8864, BW=34.6MiB/s (36.3MB/s)(69.5MiB/2006msec) 00:26:07.377 slat (usec): min=2, max=138, avg= 2.88, stdev= 2.26 00:26:07.377 clat (usec): min=2613, max=14010, avg=7914.58, stdev=625.62 00:26:07.377 lat (usec): min=2636, max=14013, avg=7917.46, stdev=625.55 00:26:07.377 clat percentiles (usec): 00:26:07.377 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7439], 00:26:07.377 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:26:07.377 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:26:07.377 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[11863], 99.95th=[12780], 00:26:07.377 | 99.99th=[13960] 00:26:07.377 bw ( KiB/s): min=33888, max=36440, per=99.91%, avg=35426.00, stdev=1085.38, samples=4 00:26:07.377 iops : min= 8472, max= 9110, avg=8856.50, stdev=271.34, samples=4 00:26:07.377 write: IOPS=8879, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2006msec); 0 zone resets 00:26:07.377 slat (usec): min=2, 
max=119, avg= 3.06, stdev= 1.78 00:26:07.377 clat (usec): min=1353, max=12545, avg=6421.03, stdev=536.48 00:26:07.377 lat (usec): min=1362, max=12548, avg=6424.09, stdev=536.42 00:26:07.377 clat percentiles (usec): 00:26:07.377 | 1.00th=[ 5276], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:26:07.377 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:26:07.377 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7177], 00:26:07.377 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[10683], 99.95th=[11600], 00:26:07.377 | 99.99th=[12518] 00:26:07.377 bw ( KiB/s): min=34896, max=35856, per=99.97%, avg=35508.00, stdev=431.78, samples=4 00:26:07.377 iops : min= 8724, max= 8964, avg=8877.00, stdev=107.94, samples=4 00:26:07.377 lat (msec) : 2=0.02%, 4=0.10%, 10=99.72%, 20=0.16% 00:26:07.377 cpu : usr=67.18%, sys=30.17%, ctx=99, majf=0, minf=29 00:26:07.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:07.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:07.378 issued rwts: total=17782,17812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:07.378 00:26:07.378 Run status group 0 (all jobs): 00:26:07.378 READ: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.5MiB (72.8MB), run=2006-2006msec 00:26:07.378 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2006-2006msec 00:26:07.378 15:06:52 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:07.378 15:06:52 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:07.378 15:06:52 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:07.378 15:06:52 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:07.378 15:06:52 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:07.378 15:06:52 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:07.378 15:06:52 -- common/autotest_common.sh@1327 -- # shift 00:26:07.378 15:06:52 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:07.378 15:06:52 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.378 15:06:52 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:07.378 15:06:52 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:07.378 15:06:52 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:07.378 15:06:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:07.378 15:06:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:07.378 15:06:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.378 15:06:53 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:07.378 15:06:53 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:07.378 15:06:53 -- 
common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:07.378 15:06:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:07.378 15:06:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:07.378 15:06:53 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:07.378 15:06:53 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:07.705 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:07.705 fio-3.35 00:26:07.705 Starting 1 thread 00:26:07.705 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.236 00:26:10.236 test: (groupid=0, jobs=1): err= 0: pid=3866465: Fri Apr 26 15:06:55 2024 00:26:10.236 read: IOPS=8029, BW=125MiB/s (132MB/s)(252MiB/2005msec) 00:26:10.236 slat (usec): min=3, max=134, avg= 4.36, stdev= 2.65 00:26:10.236 clat (usec): min=2184, max=17240, avg=9257.72, stdev=2303.50 00:26:10.236 lat (usec): min=2188, max=17244, avg=9262.08, stdev=2303.52 00:26:10.236 clat percentiles (usec): 00:26:10.236 | 1.00th=[ 4752], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7242], 00:26:10.236 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:26:10.236 | 70.00th=[10552], 80.00th=[11076], 90.00th=[12256], 95.00th=[13304], 00:26:10.236 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16909], 99.95th=[17171], 00:26:10.236 | 99.99th=[17171] 00:26:10.236 bw ( KiB/s): min=59008, max=71808, per=51.33%, avg=65944.00, stdev=6431.04, samples=4 00:26:10.236 iops : min= 3688, max= 4488, avg=4121.50, stdev=401.94, samples=4 00:26:10.236 write: IOPS=4679, BW=73.1MiB/s (76.7MB/s)(135MiB/1846msec); 0 zone resets 00:26:10.236 slat (usec): min=30, max=163, avg=38.74, stdev= 7.04 00:26:10.236 clat (usec): min=5084, max=19194, avg=11686.29, stdev=2027.62 00:26:10.236 lat (usec): min=5118, max=19227, avg=11725.03, stdev=2027.55 00:26:10.236 clat percentiles (usec): 00:26:10.236 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10028], 00:26:10.236 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:26:10.236 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14615], 95.00th=[15533], 00:26:10.236 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18482], 99.95th=[18744], 00:26:10.236 | 99.99th=[19268] 00:26:10.236 bw ( KiB/s): min=62080, max=76000, per=91.58%, avg=68576.00, stdev=7145.25, samples=4 00:26:10.236 iops : min= 3880, max= 4750, avg=4286.00, stdev=446.58, samples=4 00:26:10.236 lat (msec) : 4=0.19%, 10=47.48%, 20=52.33% 00:26:10.236 cpu : usr=82.00%, sys=16.51%, ctx=47, majf=0, minf=50 00:26:10.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:10.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:10.236 issued rwts: total=16099,8639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.236 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:10.236 00:26:10.236 Run status group 0 (all jobs): 00:26:10.236 READ: bw=125MiB/s (132MB/s), 125MiB/s-125MiB/s (132MB/s-132MB/s), io=252MiB (264MB), run=2005-2005msec 00:26:10.236 WRITE: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=135MiB (142MB), run=1846-1846msec 00:26:10.236 15:06:55 -- host/fio.sh@45 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.237 15:06:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.237 15:06:55 -- common/autotest_common.sh@10 -- # set +x 00:26:10.237 15:06:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.237 15:06:55 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:26:10.237 15:06:55 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:26:10.237 15:06:55 -- host/fio.sh@49 -- # get_nvme_bdfs 00:26:10.237 15:06:55 -- common/autotest_common.sh@1499 -- # bdfs=() 00:26:10.237 15:06:55 -- common/autotest_common.sh@1499 -- # local bdfs 00:26:10.237 15:06:55 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:10.237 15:06:55 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:10.237 15:06:55 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:26:10.237 15:06:55 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:26:10.237 15:06:55 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:26:10.237 15:06:55 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 -i 10.0.0.2 00:26:10.237 15:06:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.237 15:06:55 -- common/autotest_common.sh@10 -- # set +x 00:26:12.762 Nvme0n1 00:26:12.762 15:06:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.762 15:06:58 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:26:12.762 15:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.762 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:16.040 15:07:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.041 15:07:01 -- host/fio.sh@51 -- # ls_guid=089a301b-c751-43cd-ab99-7b82283db840 00:26:16.041 15:07:01 -- host/fio.sh@52 -- # get_lvs_free_mb 089a301b-c751-43cd-ab99-7b82283db840 00:26:16.041 15:07:01 -- common/autotest_common.sh@1350 -- # local lvs_uuid=089a301b-c751-43cd-ab99-7b82283db840 00:26:16.041 15:07:01 -- common/autotest_common.sh@1351 -- # local lvs_info 00:26:16.041 15:07:01 -- common/autotest_common.sh@1352 -- # local fc 00:26:16.041 15:07:01 -- common/autotest_common.sh@1353 -- # local cs 00:26:16.041 15:07:01 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:26:16.041 15:07:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.041 15:07:01 -- common/autotest_common.sh@10 -- # set +x 00:26:16.041 15:07:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.041 15:07:01 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:26:16.041 { 00:26:16.041 "uuid": "089a301b-c751-43cd-ab99-7b82283db840", 00:26:16.041 "name": "lvs_0", 00:26:16.041 "base_bdev": "Nvme0n1", 00:26:16.041 "total_data_clusters": 930, 00:26:16.041 "free_clusters": 930, 00:26:16.041 "block_size": 512, 00:26:16.041 "cluster_size": 1073741824 00:26:16.041 } 00:26:16.041 ]' 00:26:16.041 15:07:01 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="089a301b-c751-43cd-ab99-7b82283db840") .free_clusters' 00:26:16.041 15:07:01 -- common/autotest_common.sh@1355 -- # fc=930 00:26:16.041 15:07:01 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="089a301b-c751-43cd-ab99-7b82283db840") .cluster_size' 00:26:16.041 15:07:01 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:26:16.041 15:07:01 -- common/autotest_common.sh@1359 -- # free_mb=952320 00:26:16.041 15:07:01 -- 
common/autotest_common.sh@1360 -- # echo 952320 00:26:16.041 952320 00:26:16.041 15:07:01 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:26:16.041 15:07:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.041 15:07:01 -- common/autotest_common.sh@10 -- # set +x 00:26:16.041 5885d5d8-1cd1-48d4-aedf-07f0212b7c3f 00:26:16.041 15:07:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.041 15:07:01 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:26:16.041 15:07:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.041 15:07:01 -- common/autotest_common.sh@10 -- # set +x 00:26:16.041 15:07:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.041 15:07:01 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:26:16.041 15:07:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.041 15:07:01 -- common/autotest_common.sh@10 -- # set +x 00:26:16.041 15:07:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.041 15:07:01 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:16.041 15:07:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.041 15:07:01 -- common/autotest_common.sh@10 -- # set +x 00:26:16.041 15:07:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.041 15:07:01 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:16.041 15:07:01 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:16.041 15:07:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:16.041 15:07:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:16.041 15:07:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:16.041 15:07:01 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:16.041 15:07:01 -- common/autotest_common.sh@1327 -- # shift 00:26:16.041 15:07:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:16.041 15:07:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:16.041 15:07:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:16.041 15:07:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:16.041 15:07:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:16.041 15:07:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:16.041 15:07:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:16.041 15:07:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:16.041 15:07:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:16.041 15:07:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:16.041 15:07:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:16.041 15:07:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:16.041 15:07:01 -- 
common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:16.041 15:07:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:16.041 15:07:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:16.041 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:16.041 fio-3.35 00:26:16.041 Starting 1 thread 00:26:16.041 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.568 00:26:18.568 test: (groupid=0, jobs=1): err= 0: pid=3867598: Fri Apr 26 15:07:04 2024 00:26:18.568 read: IOPS=5982, BW=23.4MiB/s (24.5MB/s)(46.9MiB/2007msec) 00:26:18.568 slat (usec): min=2, max=194, avg= 3.80, stdev= 3.33 00:26:18.568 clat (usec): min=955, max=171547, avg=11715.04, stdev=11656.80 00:26:18.568 lat (usec): min=959, max=171584, avg=11718.84, stdev=11657.16 00:26:18.568 clat percentiles (msec): 00:26:18.568 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:26:18.568 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:26:18.568 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:26:18.568 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:26:18.568 | 99.99th=[ 171] 00:26:18.568 bw ( KiB/s): min=16880, max=26384, per=99.71%, avg=23860.00, stdev=4655.59, samples=4 00:26:18.568 iops : min= 4220, max= 6596, avg=5965.00, stdev=1163.90, samples=4 00:26:18.568 write: IOPS=5967, BW=23.3MiB/s (24.4MB/s)(46.8MiB/2007msec); 0 zone resets 00:26:18.568 slat (usec): min=2, max=164, avg= 3.98, stdev= 2.92 00:26:18.568 clat (usec): min=336, max=169015, avg=9545.82, stdev=10917.90 00:26:18.568 lat (usec): min=340, max=169022, avg=9549.81, stdev=10918.31 00:26:18.568 clat percentiles (msec): 00:26:18.568 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:26:18.568 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:26:18.568 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:26:18.568 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:26:18.568 | 99.99th=[ 169] 00:26:18.568 bw ( KiB/s): min=17896, max=26248, per=99.93%, avg=23852.00, stdev=3987.83, samples=4 00:26:18.568 iops : min= 4474, max= 6562, avg=5963.00, stdev=996.96, samples=4 00:26:18.568 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:26:18.568 lat (msec) : 2=0.03%, 4=0.08%, 10=55.23%, 20=44.11%, 250=0.53% 00:26:18.568 cpu : usr=65.70%, sys=32.35%, ctx=66, majf=0, minf=34 00:26:18.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:26:18.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:18.568 issued rwts: total=12007,11976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:18.568 00:26:18.568 Run status group 0 (all jobs): 00:26:18.568 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.2MB), run=2007-2007msec 00:26:18.568 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.1MB), run=2007-2007msec 00:26:18.568 15:07:04 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:18.568 15:07:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.568 
15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:18.568 15:07:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.568 15:07:04 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:26:18.568 15:07:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.568 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:19.502 15:07:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.502 15:07:04 -- host/fio.sh@62 -- # ls_nested_guid=ca62f1ad-5ce9-46d8-8944-0f179745b51d 00:26:19.502 15:07:04 -- host/fio.sh@63 -- # get_lvs_free_mb ca62f1ad-5ce9-46d8-8944-0f179745b51d 00:26:19.502 15:07:04 -- common/autotest_common.sh@1350 -- # local lvs_uuid=ca62f1ad-5ce9-46d8-8944-0f179745b51d 00:26:19.502 15:07:04 -- common/autotest_common.sh@1351 -- # local lvs_info 00:26:19.502 15:07:04 -- common/autotest_common.sh@1352 -- # local fc 00:26:19.502 15:07:04 -- common/autotest_common.sh@1353 -- # local cs 00:26:19.502 15:07:04 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:26:19.502 15:07:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.502 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:19.502 15:07:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.502 15:07:04 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:26:19.502 { 00:26:19.502 "uuid": "089a301b-c751-43cd-ab99-7b82283db840", 00:26:19.502 "name": "lvs_0", 00:26:19.502 "base_bdev": "Nvme0n1", 00:26:19.502 "total_data_clusters": 930, 00:26:19.502 "free_clusters": 0, 00:26:19.502 "block_size": 512, 00:26:19.502 "cluster_size": 1073741824 00:26:19.502 }, 00:26:19.502 { 00:26:19.502 "uuid": "ca62f1ad-5ce9-46d8-8944-0f179745b51d", 00:26:19.502 "name": "lvs_n_0", 00:26:19.502 "base_bdev": "5885d5d8-1cd1-48d4-aedf-07f0212b7c3f", 00:26:19.502 "total_data_clusters": 237847, 00:26:19.502 "free_clusters": 237847, 00:26:19.502 "block_size": 512, 00:26:19.502 "cluster_size": 4194304 00:26:19.502 } 00:26:19.502 ]' 00:26:19.502 15:07:04 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="ca62f1ad-5ce9-46d8-8944-0f179745b51d") .free_clusters' 00:26:19.502 15:07:05 -- common/autotest_common.sh@1355 -- # fc=237847 00:26:19.502 15:07:05 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="ca62f1ad-5ce9-46d8-8944-0f179745b51d") .cluster_size' 00:26:19.502 15:07:05 -- common/autotest_common.sh@1356 -- # cs=4194304 00:26:19.502 15:07:05 -- common/autotest_common.sh@1359 -- # free_mb=951388 00:26:19.502 15:07:05 -- common/autotest_common.sh@1360 -- # echo 951388 00:26:19.502 951388 00:26:19.502 15:07:05 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:26:19.502 15:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.502 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:26:19.760 999cbd99-7660-442b-8e42-514f36bddd2c 00:26:19.760 15:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.760 15:07:05 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:26:19.760 15:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.760 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:26:19.760 15:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.760 15:07:05 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:26:19.760 15:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.760 15:07:05 -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.760 15:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.760 15:07:05 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:19.760 15:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.760 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:26:20.017 15:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.017 15:07:05 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:20.017 15:07:05 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:20.017 15:07:05 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:20.017 15:07:05 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:20.017 15:07:05 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:20.017 15:07:05 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:20.017 15:07:05 -- common/autotest_common.sh@1327 -- # shift 00:26:20.017 15:07:05 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:20.017 15:07:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.017 15:07:05 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:20.017 15:07:05 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:20.017 15:07:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:20.017 15:07:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:20.017 15:07:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:20.017 15:07:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.017 15:07:05 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:20.017 15:07:05 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:20.017 15:07:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:20.017 15:07:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:20.017 15:07:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:20.017 15:07:05 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:20.017 15:07:05 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:20.017 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:20.017 fio-3.35 00:26:20.017 Starting 1 thread 00:26:20.017 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.541 00:26:22.541 test: (groupid=0, jobs=1): err= 0: pid=3868074: Fri Apr 26 15:07:08 2024 00:26:22.541 read: IOPS=5912, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2008msec) 00:26:22.541 slat (usec): min=2, max=149, avg= 3.02, stdev= 2.68 00:26:22.541 clat (usec): min=4279, max=20219, avg=11873.48, stdev=1040.72 00:26:22.541 lat 
(usec): min=4289, max=20221, avg=11876.50, stdev=1040.60 00:26:22.541 clat percentiles (usec): 00:26:22.541 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:26:22.541 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:26:22.541 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:26:22.541 | 99.00th=[14091], 99.50th=[14353], 99.90th=[19006], 99.95th=[20055], 00:26:22.541 | 99.99th=[20317] 00:26:22.541 bw ( KiB/s): min=22264, max=24152, per=99.85%, avg=23614.00, stdev=904.75, samples=4 00:26:22.541 iops : min= 5566, max= 6038, avg=5903.50, stdev=226.19, samples=4 00:26:22.541 write: IOPS=5912, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2008msec); 0 zone resets 00:26:22.541 slat (usec): min=2, max=126, avg= 3.23, stdev= 2.55 00:26:22.541 clat (usec): min=2111, max=17366, avg=9577.90, stdev=872.43 00:26:22.541 lat (usec): min=2118, max=17369, avg=9581.13, stdev=872.37 00:26:22.541 clat percentiles (usec): 00:26:22.541 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8979], 00:26:22.541 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:26:22.541 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10945], 00:26:22.541 | 99.00th=[11469], 99.50th=[11731], 99.90th=[14877], 99.95th=[16188], 00:26:22.541 | 99.99th=[17433] 00:26:22.541 bw ( KiB/s): min=23248, max=23808, per=99.88%, avg=23620.00, stdev=253.45, samples=4 00:26:22.541 iops : min= 5812, max= 5952, avg=5905.00, stdev=63.36, samples=4 00:26:22.541 lat (msec) : 4=0.04%, 10=36.49%, 20=63.45%, 50=0.02% 00:26:22.541 cpu : usr=63.53%, sys=34.68%, ctx=85, majf=0, minf=34 00:26:22.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:26:22.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:22.541 issued rwts: total=11872,11872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:22.541 00:26:22.541 Run status group 0 (all jobs): 00:26:22.541 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2008-2008msec 00:26:22.541 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2008-2008msec 00:26:22.541 15:07:08 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:22.541 15:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.541 15:07:08 -- common/autotest_common.sh@10 -- # set +x 00:26:22.541 15:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.541 15:07:08 -- host/fio.sh@72 -- # sync 00:26:22.541 15:07:08 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:26:22.541 15:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.541 15:07:08 -- common/autotest_common.sh@10 -- # set +x 00:26:26.714 15:07:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.714 15:07:11 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:26:26.714 15:07:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.714 15:07:11 -- common/autotest_common.sh@10 -- # set +x 00:26:26.714 15:07:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.714 15:07:11 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:26:26.714 15:07:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.714 15:07:11 -- common/autotest_common.sh@10 -- # set +x 
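Editor's note: the pass above sizes the nested lvol from the rpc_cmd bdev_lvol_get_lvstores output: free MiB = free_clusters x cluster_size / 1 MiB, i.e. 237847 x 4194304 / 1048576 = 951388. A minimal standalone sketch of the same provision-then-fio sequence, assuming SPDK_DIR points at the checkout used in this job, and selecting the lvstore by name where the trace selects by UUID:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK_DIR/scripts/rpc.py"
  # free MiB = free_clusters * cluster_size / 1 MiB (237847 * 4194304 / 1048576 = 951388 here)
  fc=$("$rpc" bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0") .free_clusters')
  cs=$("$rpc" bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0") .cluster_size')
  free_mb=$((fc * cs / 1024 / 1024))
  # carve the lvol and export it over NVMe/TCP, as the host/fio.sh trace above does
  "$rpc" bdev_lvol_create -l lvs_n_0 lbd_nest_0 "$free_mb"
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
  # fio reaches the namespace through SPDK's fio plugin (ioengine=spdk); the target is
  # named by a transport ID in --filename rather than by a block-device node
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" /usr/src/fio/fio \
      "$SPDK_DIR/app/fio/nvme/example_config.fio" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096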
00:26:29.236 15:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.236 15:07:14 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:26:29.236 15:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.236 15:07:14 -- common/autotest_common.sh@10 -- # set +x 00:26:29.236 15:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.236 15:07:14 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:26:29.236 15:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.236 15:07:14 -- common/autotest_common.sh@10 -- # set +x 00:26:30.651 15:07:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.651 15:07:16 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:26:30.651 15:07:16 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:26:30.651 15:07:16 -- host/fio.sh@84 -- # nvmftestfini 00:26:30.651 15:07:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:30.651 15:07:16 -- nvmf/common.sh@117 -- # sync 00:26:30.651 15:07:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.651 15:07:16 -- nvmf/common.sh@120 -- # set +e 00:26:30.651 15:07:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.651 15:07:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.651 rmmod nvme_tcp 00:26:30.651 rmmod nvme_fabrics 00:26:30.651 rmmod nvme_keyring 00:26:30.651 15:07:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.651 15:07:16 -- nvmf/common.sh@124 -- # set -e 00:26:30.651 15:07:16 -- nvmf/common.sh@125 -- # return 0 00:26:30.651 15:07:16 -- nvmf/common.sh@478 -- # '[' -n 3865916 ']' 00:26:30.651 15:07:16 -- nvmf/common.sh@479 -- # killprocess 3865916 00:26:30.651 15:07:16 -- common/autotest_common.sh@936 -- # '[' -z 3865916 ']' 00:26:30.651 15:07:16 -- common/autotest_common.sh@940 -- # kill -0 3865916 00:26:30.651 15:07:16 -- common/autotest_common.sh@941 -- # uname 00:26:30.651 15:07:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:30.651 15:07:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3865916 00:26:30.651 15:07:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:30.651 15:07:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:30.651 15:07:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3865916' 00:26:30.651 killing process with pid 3865916 00:26:30.651 15:07:16 -- common/autotest_common.sh@955 -- # kill 3865916 00:26:30.651 15:07:16 -- common/autotest_common.sh@960 -- # wait 3865916 00:26:30.908 15:07:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:30.908 15:07:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:30.908 15:07:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:30.908 15:07:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.908 15:07:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.908 15:07:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.908 15:07:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.908 15:07:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.809 15:07:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:32.809 00:26:32.809 real 0m30.738s 00:26:32.809 user 1m52.429s 00:26:32.809 sys 0m5.589s 00:26:32.809 15:07:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:32.809 15:07:18 -- common/autotest_common.sh@10 -- # set +x 00:26:32.809 ************************************ 00:26:32.809 END TEST nvmf_fio_host 
00:26:32.809 ************************************ 00:26:32.809 15:07:18 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:32.809 15:07:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:32.809 15:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:32.809 15:07:18 -- common/autotest_common.sh@10 -- # set +x 00:26:33.068 ************************************ 00:26:33.068 START TEST nvmf_failover 00:26:33.068 ************************************ 00:26:33.068 15:07:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:33.068 * Looking for test storage... 00:26:33.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.068 15:07:18 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.068 15:07:18 -- nvmf/common.sh@7 -- # uname -s 00:26:33.068 15:07:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.068 15:07:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.068 15:07:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.068 15:07:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.068 15:07:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.068 15:07:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.068 15:07:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.068 15:07:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.068 15:07:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.068 15:07:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.068 15:07:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:33.068 15:07:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:33.068 15:07:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.068 15:07:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.068 15:07:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.068 15:07:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.068 15:07:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.068 15:07:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.068 15:07:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.068 15:07:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.068 15:07:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.068 15:07:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.068 15:07:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.068 15:07:18 -- paths/export.sh@5 -- # export PATH 00:26:33.068 15:07:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.068 15:07:18 -- nvmf/common.sh@47 -- # : 0 00:26:33.068 15:07:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.068 15:07:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.068 15:07:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.068 15:07:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.068 15:07:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.068 15:07:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.068 15:07:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.068 15:07:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.068 15:07:18 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.068 15:07:18 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.068 15:07:18 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:33.068 15:07:18 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:33.068 15:07:18 -- host/failover.sh@18 -- # nvmftestinit 00:26:33.068 15:07:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:33.068 15:07:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.068 15:07:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:33.068 15:07:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:33.069 15:07:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:33.069 15:07:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.069 15:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.069 15:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.069 15:07:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:33.069 15:07:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 
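Editor's note: failover.sh drives two separate RPC servers in what follows. The nvmf target started through nvmfappstart answers on rpc.py's default socket (/var/tmp/spdk.sock), while the bdevperf initiator, started later with -r /var/tmp/bdevperf.sock, answers on its own socket selected with rpc.py -s. A short sketch of the split, reusing commands the trace issues below:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target-side RPC: goes to the nvmf_tgt app on the default socket
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  # initiator-side RPC: goes to bdevperf on the socket chosen with -s
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1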
00:26:33.069 15:07:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.069 15:07:18 -- common/autotest_common.sh@10 -- # set +x 00:26:34.967 15:07:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:34.967 15:07:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:34.967 15:07:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:34.967 15:07:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:34.967 15:07:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:34.967 15:07:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:34.967 15:07:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:34.967 15:07:20 -- nvmf/common.sh@295 -- # net_devs=() 00:26:34.967 15:07:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:34.967 15:07:20 -- nvmf/common.sh@296 -- # e810=() 00:26:34.967 15:07:20 -- nvmf/common.sh@296 -- # local -ga e810 00:26:34.967 15:07:20 -- nvmf/common.sh@297 -- # x722=() 00:26:34.967 15:07:20 -- nvmf/common.sh@297 -- # local -ga x722 00:26:34.967 15:07:20 -- nvmf/common.sh@298 -- # mlx=() 00:26:34.967 15:07:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:34.967 15:07:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.967 15:07:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:34.967 15:07:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:34.967 15:07:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:34.967 15:07:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.967 15:07:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:34.967 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:34.967 15:07:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.967 15:07:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:34.967 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:34.967 15:07:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:26:34.967 15:07:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:34.967 15:07:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.967 15:07:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.967 15:07:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:34.967 15:07:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.967 15:07:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:34.967 Found net devices under 0000:84:00.0: cvl_0_0 00:26:34.967 15:07:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.967 15:07:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.967 15:07:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.967 15:07:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:34.967 15:07:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.967 15:07:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:34.967 Found net devices under 0000:84:00.1: cvl_0_1 00:26:34.967 15:07:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.967 15:07:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:34.967 15:07:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:34.967 15:07:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:34.967 15:07:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.967 15:07:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.967 15:07:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.967 15:07:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:34.967 15:07:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.967 15:07:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.967 15:07:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:34.967 15:07:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.967 15:07:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.967 15:07:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:34.967 15:07:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:34.967 15:07:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.967 15:07:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.967 15:07:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.967 15:07:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.967 15:07:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:34.967 15:07:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.967 15:07:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.967 15:07:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.967 15:07:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:34.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:26:34.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:26:34.967 00:26:34.967 --- 10.0.0.2 ping statistics --- 00:26:34.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.967 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:26:34.967 15:07:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:26:34.967 00:26:34.967 --- 10.0.0.1 ping statistics --- 00:26:34.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.967 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:26:34.967 15:07:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.967 15:07:20 -- nvmf/common.sh@411 -- # return 0 00:26:34.967 15:07:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:34.967 15:07:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.967 15:07:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:34.967 15:07:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.967 15:07:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:34.967 15:07:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:34.967 15:07:20 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:34.967 15:07:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:34.967 15:07:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:34.967 15:07:20 -- common/autotest_common.sh@10 -- # set +x 00:26:34.967 15:07:20 -- nvmf/common.sh@470 -- # nvmfpid=3871197 00:26:34.967 15:07:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:34.967 15:07:20 -- nvmf/common.sh@471 -- # waitforlisten 3871197 00:26:34.967 15:07:20 -- common/autotest_common.sh@817 -- # '[' -z 3871197 ']' 00:26:34.967 15:07:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.967 15:07:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:34.967 15:07:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.967 15:07:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:34.967 15:07:20 -- common/autotest_common.sh@10 -- # set +x 00:26:34.968 [2024-04-26 15:07:20.683663] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:26:34.968 [2024-04-26 15:07:20.683751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.226 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.226 [2024-04-26 15:07:20.729863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:35.226 [2024-04-26 15:07:20.761602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:35.226 [2024-04-26 15:07:20.854792] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
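Editor's note: the two ping checks above close out nvmf_tcp_init. The first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2; the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The commands, collected from the trace into one sketch (the initial address flushes omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator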
00:26:35.226 [2024-04-26 15:07:20.854856] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.226 [2024-04-26 15:07:20.854872] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.226 [2024-04-26 15:07:20.854886] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.226 [2024-04-26 15:07:20.854899] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.226 [2024-04-26 15:07:20.854984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.226 [2024-04-26 15:07:20.856051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.226 [2024-04-26 15:07:20.856055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.483 15:07:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:35.483 15:07:20 -- common/autotest_common.sh@850 -- # return 0 00:26:35.483 15:07:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:35.483 15:07:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:35.483 15:07:20 -- common/autotest_common.sh@10 -- # set +x 00:26:35.483 15:07:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.483 15:07:20 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:35.740 [2024-04-26 15:07:21.256710] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.740 15:07:21 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:35.998 Malloc0 00:26:35.998 15:07:21 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.256 15:07:21 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.513 15:07:22 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.769 [2024-04-26 15:07:22.319756] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.769 15:07:22 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:37.026 [2024-04-26 15:07:22.604668] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:37.026 15:07:22 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:37.282 [2024-04-26 15:07:22.893531] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:37.282 15:07:22 -- host/failover.sh@31 -- # bdevperf_pid=3871485 00:26:37.282 15:07:22 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:37.282 15:07:22 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; 
killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:37.282 15:07:22 -- host/failover.sh@34 -- # waitforlisten 3871485 /var/tmp/bdevperf.sock 00:26:37.282 15:07:22 -- common/autotest_common.sh@817 -- # '[' -z 3871485 ']' 00:26:37.282 15:07:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:37.282 15:07:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:37.282 15:07:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:37.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 15:07:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:37.282 15:07:22 -- common/autotest_common.sh@10 -- # set +x 00:26:37.540 15:07:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:37.540 15:07:23 -- common/autotest_common.sh@850 -- # return 0 00:26:37.540 15:07:23 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:38.105 NVMe0n1 00:26:38.105 15:07:23 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:38.362 00:26:38.362 15:07:23 -- host/failover.sh@39 -- # run_test_pid=3871618 00:26:38.362 15:07:23 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:38.362 15:07:23 -- host/failover.sh@41 -- # sleep 1 00:26:39.294 15:07:24 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.553 [2024-04-26 15:07:25.178735] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae45b0 is same with the state(5) to be set 00:26:39.553 [... same *ERROR* repeated for tqpair=0x1ae45b0 with successive timestamps through 2024-04-26 15:07:25.179043 ...]
00:26:39.553 15:07:25 -- host/failover.sh@45 -- # sleep 3 00:26:42.833 15:07:28 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:43.091 00:26:43.091 15:07:28 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:43.350 [2024-04-26 15:07:28.845997] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4dc0 is same with the state(5) to be set 00:26:43.350 [... same *ERROR* repeated for tqpair=0x1ae4dc0 with successive timestamps through 2024-04-26 15:07:28.846502 ...]
00:26:43.350 15:07:28 -- host/failover.sh@50 -- # sleep 3
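Editor's note: the listener add/remove sequence around this point is the actual failover exercise. bdevperf holds several paths to nqn.2016-06.io.spdk:cnode1 under the single controller name NVMe0 (the second and third attach calls return no new bdev, consistent with registering alternate paths), and each nvmf_subsystem_remove_listener drops the path in use, so I/O has to resume on a survivor; the bursts of tqpair recv-state *ERROR* lines right after each removal are consistent with the target tearing down in-flight connections. The core of the dance, as issued through rpc.py:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # add an extra path for the same controller name, then drop the active listener
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3    # bdevperf keeps verifying over the 4421 path
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421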
00:26:46.634 15:07:31 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.634 [2024-04-26 15:07:32.125299] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.634 15:07:32 -- host/failover.sh@55 -- # sleep 1 00:26:47.572 15:07:33 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:47.871 [2024-04-26 15:07:33.416384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188a900 is same with the state(5) to be set 00:26:47.871 [... same *ERROR* repeated for tqpair=0x188a900 with successive timestamps through 2024-04-26 15:07:33.417409 ...]
00:26:47.872 15:07:33 -- host/failover.sh@59 -- # wait 3871618 00:26:54.436 0 00:26:54.436 15:07:39 -- host/failover.sh@61 -- # killprocess 3871485 00:26:54.436 15:07:39 -- common/autotest_common.sh@936 -- # '[' -z 3871485 ']' 00:26:54.436 15:07:39 -- common/autotest_common.sh@940 -- # kill -0 3871485 00:26:54.436 15:07:39 -- common/autotest_common.sh@941 -- # uname 00:26:54.436 15:07:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:54.436 15:07:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3871485 00:26:54.436 15:07:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:54.436 15:07:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:54.436 15:07:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3871485' 00:26:54.436 killing process with pid
00:26:54.436 15:07:39 -- common/autotest_common.sh@955 -- # kill 3871485
00:26:54.436 15:07:39 -- common/autotest_common.sh@960 -- # wait 3871485
00:26:54.436 15:07:39 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:54.436 [2024-04-26 15:07:22.955902] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:26:54.436 [2024-04-26 15:07:22.955986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3871485 ]
00:26:54.436 EAL: No free 2048 kB hugepages reported on node 1
00:26:54.436 [2024-04-26 15:07:22.989033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:26:54.436 [2024-04-26 15:07:23.017671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:54.436 [2024-04-26 15:07:23.103466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:54.436 Running I/O for 15 seconds...
00:26:54.436 [2024-04-26 15:07:25.180904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.436 [2024-04-26 15:07:25.180952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.436 [... identical print_command/print_completion pairs repeated from 15:07:25.180999 through 15:07:25.183455 for READ lba:80528-80688 (SGL TRANSPORT DATA BLOCK) and WRITE lba:80752-81232 (SGL DATA BLOCK OFFSET), every command completing ABORTED - SQ DELETION (00/08) qid:1 ...]
00:26:54.437 [2024-04-26 15:07:25.183485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:54.437 [2024-04-26 15:07:25.183502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81240 len:8 PRP1 0x0 PRP2 0x0
00:26:54.437 [2024-04-26 15:07:25.183515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.437 [... nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, followed by the same manual-complete sequence, repeated through 15:07:25.185766 for queued WRITE lba:81248-81536 and READ lba:80696-80744 (PRP1 0x0 PRP2 0x0), every command completing ABORTED - SQ DELETION (00/08) qid:1 ...]
00:26:54.438 [2024-04-26 15:07:25.185829] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x165a5e0 was disconnected and freed. reset controller.
00:26:54.438 [2024-04-26 15:07:25.185848] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:54.438 [2024-04-26 15:07:25.185882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:54.438 [2024-04-26 15:07:25.185900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.438 [... same ASYNC EVENT REQUEST abort pair repeated for qid:0 cid:1, cid:2, and cid:3 through 15:07:25.185986 ...]
00:26:54.438 [2024-04-26 15:07:25.185999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.438 [2024-04-26 15:07:25.186076] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163bad0 (9): Bad file descriptor
00:26:54.438 [2024-04-26 15:07:25.189277] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.438 [2024-04-26 15:07:25.219836] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:54.438 [2024-04-26 15:07:28.847147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.438 [2024-04-26 15:07:28.847200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated pairs condensed: between 15:07:28.847232 and 15:07:28.851011, every outstanding WRITE at lba 77192 through 77800 (len:8, SGL DATA BLOCK) and READ at lba 76784 through 77168 (len:8, SGL TRANSPORT DATA BLOCK) on qid:1 was printed and completed with ABORTED - SQ DELETION (00/08) ...]
00:26:54.439 [2024-04-26 15:07:28.851048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:54.439 [2024-04-26 15:07:28.851064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:54.439 [2024-04-26 15:07:28.851077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77176 len:8 PRP1 0x0 PRP2 0x0
00:26:54.439 [2024-04-26 15:07:28.851090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.439 [2024-04-26 15:07:28.851159] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x165c400 was disconnected and freed. reset controller.
00:26:54.439 [2024-04-26 15:07:28.851178] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:54.439 [2024-04-26 15:07:28.851213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:54.439 [2024-04-26 15:07:28.851231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.439 [2024-04-26 15:07:28.851246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:54.439 [2024-04-26 15:07:28.851260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.439 [2024-04-26 15:07:28.851274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:54.439 [2024-04-26 15:07:28.851290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.440 [2024-04-26 15:07:28.851304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:54.440 [2024-04-26 15:07:28.851318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.440 [2024-04-26 15:07:28.851332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.440 [2024-04-26 15:07:28.851385] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163bad0 (9): Bad file descriptor
00:26:54.440 [2024-04-26 15:07:28.854571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.440 [2024-04-26 15:07:28.883187] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
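The status printed for every aborted command, ABORTED - SQ DELETION (00/08), decodes as status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion: the I/O failed only because the initiator deleted its own submission queue while failing over, not because the target faulted, which is why each episode ends with "Resetting controller successful." and the test continues. A quick, hedged way to triage a saved copy of this console output with plain grep/awk (the build.log filename is an assumption, and this counts every command the driver printed, which in this log is exactly the aborted set):

# How many commands were aborted, and what LBA span they covered
grep -c 'ABORTED - SQ DELETION' build.log
awk '/nvme_io_qpair_print_command/ {
  for (i = 1; i <= NF; i++)
    if ($i ~ /^lba:[0-9]+$/) {          # fields look like "lba:77184"
      lba = substr($i, 5) + 0           # strip the "lba:" prefix
      if (min == "" || lba < min) min = lba
      if (lba > max) max = lba
    }
}
END { print "printed (aborted) I/O spanned lba " min " through " max }' build.log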
00:26:54.440 [2024-04-26 15:07:33.415952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:54.440 [2024-04-26 15:07:33.416045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.440 [2024-04-26 15:07:33.416065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:54.440 [2024-04-26 15:07:33.416090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.440 [2024-04-26 15:07:33.416104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:54.440 [2024-04-26 15:07:33.416118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.440 [2024-04-26 15:07:33.416144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:54.440 [2024-04-26 15:07:33.416157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.440 [2024-04-26 15:07:33.416171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163bad0 is same with the state(5) to be set
00:26:54.440 [2024-04-26 15:07:33.417569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.440 [2024-04-26 15:07:33.417592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated pairs condensed: between 15:07:33.417617 and 15:07:33.418975, outstanding READs at lba 3936 through 4288 (len:8, SGL TRANSPORT DATA BLOCK) on qid:1 were each printed and completed with ABORTED - SQ DELETION (00/08) ...]
00:26:54.440 [2024-04-26 15:07:33.418989] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4376 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 
[2024-04-26 15:07:33.419636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.440 [2024-04-26 15:07:33.419957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.440 [2024-04-26 15:07:33.419970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.419984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.419997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:54.441 [2024-04-26 15:07:33.420562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420850] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.420946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.441 [2024-04-26 15:07:33.420974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.420992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.441 [2024-04-26 15:07:33.421005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.441 [2024-04-26 15:07:33.421060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.441 [2024-04-26 15:07:33.421091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.441 [2024-04-26 15:07:33.421124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.441 [2024-04-26 15:07:33.421153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:26 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.441 [2024-04-26 15:07:33.421182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.421210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.421239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.421267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.421296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.421324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.421375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.421403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.421434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.441 [2024-04-26 15:07:33.421461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c580 is same with the 
state(5) to be set 00:26:54.441 [2024-04-26 15:07:33.421512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.441 [2024-04-26 15:07:33.421523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.441 [2024-04-26 15:07:33.421534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4888 len:8 PRP1 0x0 PRP2 0x0 00:26:54.441 [2024-04-26 15:07:33.421547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.441 [2024-04-26 15:07:33.421605] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x165c580 was disconnected and freed. reset controller. 00:26:54.441 [2024-04-26 15:07:33.421624] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:54.441 [2024-04-26 15:07:33.421638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.441 [2024-04-26 15:07:33.424874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.441 [2024-04-26 15:07:33.424912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163bad0 (9): Bad file descriptor 00:26:54.441 [2024-04-26 15:07:33.545792] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:54.441 00:26:54.441 Latency(us) 00:26:54.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.441 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:54.441 Verification LBA range: start 0x0 length 0x4000 00:26:54.441 NVMe0n1 : 15.01 8506.05 33.23 478.37 0.00 14220.76 767.62 16408.27 00:26:54.441 =================================================================================================================== 00:26:54.441 Total : 8506.05 33.23 478.37 0.00 14220.76 767.62 16408.27 00:26:54.441 Received shutdown signal, test time was about 15.000000 seconds 00:26:54.441 00:26:54.441 Latency(us) 00:26:54.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.441 =================================================================================================================== 00:26:54.441 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.441 15:07:39 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:54.441 15:07:39 -- host/failover.sh@65 -- # count=3 00:26:54.441 15:07:39 -- host/failover.sh@67 -- # (( count != 3 )) 00:26:54.441 15:07:39 -- host/failover.sh@73 -- # bdevperf_pid=3873463 00:26:54.441 15:07:39 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:54.441 15:07:39 -- host/failover.sh@75 -- # waitforlisten 3873463 /var/tmp/bdevperf.sock 00:26:54.441 15:07:39 -- common/autotest_common.sh@817 -- # '[' -z 3873463 ']' 00:26:54.441 15:07:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:54.441 15:07:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:54.441 15:07:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:54.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
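The trace above and below follows the standard SPDK failover-harness pattern: bdevperf is started in RPC-driven mode (-z) on a UNIX socket, and every later step is an rpc.py call against that socket. A condensed sketch reconstructed from the exact commands in this trace (workspace paths shortened; the backgrounding, $! pid handling, port loop, and try.txt filename are paraphrased here, the trace runs the three attach calls inline and greps the harness log):

    # start bdevperf in RPC mode and wait for its control socket
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    waitforlisten $! /var/tmp/bdevperf.sock
    # expose two extra target ports so the bdev has failover paths
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # attach the same subsystem over all three ports; the extras become failover targets
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # removing the active path forces a failover, then I/O is re-driven over RPC
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    # pass criterion used at failover.sh@65/@67: exactly three successful resets
    [ "$(grep -c 'Resetting controller successful' try.txt)" -eq 3 ]

The flood of 'ABORTED - SQ DELETION' completions condensed above is the expected side effect of each such path switch: tearing down the old TCP qpair aborts everything still queued on it before the controller is reset onto the next trid.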
00:26:54.441 15:07:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:54.441 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:26:54.441 15:07:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:54.441 15:07:39 -- common/autotest_common.sh@850 -- # return 0 00:26:54.441 15:07:39 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:54.441 [2024-04-26 15:07:39.818062] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:54.441 15:07:39 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:54.441 [2024-04-26 15:07:40.090875] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:54.441 15:07:40 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:55.006 NVMe0n1 00:26:55.006 15:07:40 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:55.263 00:26:55.263 15:07:40 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:55.826 00:26:55.827 15:07:41 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:55.827 15:07:41 -- host/failover.sh@82 -- # grep -q NVMe0 00:26:56.083 15:07:41 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:56.340 15:07:41 -- host/failover.sh@87 -- # sleep 3 00:26:59.618 15:07:44 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:59.618 15:07:44 -- host/failover.sh@88 -- # grep -q NVMe0 00:26:59.618 15:07:45 -- host/failover.sh@90 -- # run_test_pid=3874126 00:26:59.618 15:07:45 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:59.618 15:07:45 -- host/failover.sh@92 -- # wait 3874126 00:27:00.552 0 00:27:00.552 15:07:46 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:00.552 [2024-04-26 15:07:39.361871] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:27:00.552 [2024-04-26 15:07:39.361951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873463 ] 00:27:00.552 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.552 [2024-04-26 15:07:39.393835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:00.552 [2024-04-26 15:07:39.422511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.552 [2024-04-26 15:07:39.503932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.552 [2024-04-26 15:07:41.853902] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:00.552 [2024-04-26 15:07:41.853999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:00.552 [2024-04-26 15:07:41.854043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.552 [2024-04-26 15:07:41.854072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:00.552 [2024-04-26 15:07:41.854087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.552 [2024-04-26 15:07:41.854101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:00.552 [2024-04-26 15:07:41.854114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.552 [2024-04-26 15:07:41.854128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:00.552 [2024-04-26 15:07:41.854142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.552 [2024-04-26 15:07:41.854155] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.552 [2024-04-26 15:07:41.854215] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.552 [2024-04-26 15:07:41.854248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1672ad0 (9): Bad file descriptor 00:27:00.552 [2024-04-26 15:07:41.905444] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:00.552 Running I/O for 1 seconds... 
00:27:00.552 00:27:00.552 Latency(us) 00:27:00.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.552 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:00.552 Verification LBA range: start 0x0 length 0x4000 00:27:00.552 NVMe0n1 : 1.01 8525.31 33.30 0.00 0.00 14923.91 3034.07 15049.01 00:27:00.552 =================================================================================================================== 00:27:00.552 Total : 8525.31 33.30 0.00 0.00 14923.91 3034.07 15049.01 00:27:00.552 15:07:46 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:00.552 15:07:46 -- host/failover.sh@95 -- # grep -q NVMe0 00:27:00.810 15:07:46 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:01.067 15:07:46 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:01.067 15:07:46 -- host/failover.sh@99 -- # grep -q NVMe0 00:27:01.324 15:07:46 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:01.582 15:07:47 -- host/failover.sh@101 -- # sleep 3 00:27:04.859 15:07:50 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:04.859 15:07:50 -- host/failover.sh@103 -- # grep -q NVMe0 00:27:04.859 15:07:50 -- host/failover.sh@108 -- # killprocess 3873463 00:27:04.859 15:07:50 -- common/autotest_common.sh@936 -- # '[' -z 3873463 ']' 00:27:04.859 15:07:50 -- common/autotest_common.sh@940 -- # kill -0 3873463 00:27:04.859 15:07:50 -- common/autotest_common.sh@941 -- # uname 00:27:04.859 15:07:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:04.859 15:07:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3873463 00:27:04.859 15:07:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:04.859 15:07:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:04.859 15:07:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3873463' 00:27:04.859 killing process with pid 3873463 00:27:04.859 15:07:50 -- common/autotest_common.sh@955 -- # kill 3873463 00:27:04.859 15:07:50 -- common/autotest_common.sh@960 -- # wait 3873463 00:27:05.117 15:07:50 -- host/failover.sh@110 -- # sync 00:27:05.117 15:07:50 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.374 15:07:50 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:05.374 15:07:50 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:05.374 15:07:50 -- host/failover.sh@116 -- # nvmftestfini 00:27:05.374 15:07:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:05.374 15:07:50 -- nvmf/common.sh@117 -- # sync 00:27:05.374 15:07:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.374 15:07:50 -- nvmf/common.sh@120 -- # set +e 00:27:05.374 15:07:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.374 15:07:50 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-tcp 00:27:05.374 rmmod nvme_tcp 00:27:05.374 rmmod nvme_fabrics 00:27:05.374 rmmod nvme_keyring 00:27:05.374 15:07:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.374 15:07:51 -- nvmf/common.sh@124 -- # set -e 00:27:05.374 15:07:51 -- nvmf/common.sh@125 -- # return 0 00:27:05.374 15:07:51 -- nvmf/common.sh@478 -- # '[' -n 3871197 ']' 00:27:05.374 15:07:51 -- nvmf/common.sh@479 -- # killprocess 3871197 00:27:05.374 15:07:51 -- common/autotest_common.sh@936 -- # '[' -z 3871197 ']' 00:27:05.374 15:07:51 -- common/autotest_common.sh@940 -- # kill -0 3871197 00:27:05.374 15:07:51 -- common/autotest_common.sh@941 -- # uname 00:27:05.374 15:07:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:05.374 15:07:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3871197 00:27:05.374 15:07:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:05.374 15:07:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:05.374 15:07:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3871197' 00:27:05.374 killing process with pid 3871197 00:27:05.374 15:07:51 -- common/autotest_common.sh@955 -- # kill 3871197 00:27:05.374 15:07:51 -- common/autotest_common.sh@960 -- # wait 3871197 00:27:05.633 15:07:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:05.633 15:07:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:05.633 15:07:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:05.633 15:07:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.633 15:07:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.633 15:07:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.633 15:07:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.633 15:07:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.169 15:07:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:08.169 00:27:08.169 real 0m34.761s 00:27:08.169 user 2m2.714s 00:27:08.169 sys 0m6.066s 00:27:08.169 15:07:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:08.169 15:07:53 -- common/autotest_common.sh@10 -- # set +x 00:27:08.169 ************************************ 00:27:08.169 END TEST nvmf_failover 00:27:08.169 ************************************ 00:27:08.169 15:07:53 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:08.169 15:07:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:08.169 15:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:08.169 15:07:53 -- common/autotest_common.sh@10 -- # set +x 00:27:08.169 ************************************ 00:27:08.169 START TEST nvmf_discovery 00:27:08.169 ************************************ 00:27:08.169 15:07:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:08.169 * Looking for test storage... 
00:27:08.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.169 15:07:53 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.169 15:07:53 -- nvmf/common.sh@7 -- # uname -s 00:27:08.169 15:07:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.169 15:07:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.169 15:07:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.169 15:07:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.169 15:07:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.169 15:07:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.169 15:07:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.169 15:07:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.169 15:07:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.169 15:07:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.169 15:07:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:08.169 15:07:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:08.169 15:07:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.169 15:07:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.169 15:07:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.169 15:07:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.169 15:07:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.169 15:07:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.169 15:07:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.169 15:07:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.169 15:07:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.169 15:07:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.169 15:07:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.169 15:07:53 -- paths/export.sh@5 -- # export PATH 00:27:08.169 15:07:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.169 15:07:53 -- nvmf/common.sh@47 -- # : 0 00:27:08.169 15:07:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:08.169 15:07:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:08.169 15:07:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.169 15:07:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.169 15:07:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.169 15:07:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:08.169 15:07:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:08.169 15:07:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:08.169 15:07:53 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:08.169 15:07:53 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:08.169 15:07:53 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:08.169 15:07:53 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:08.169 15:07:53 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:08.169 15:07:53 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:08.169 15:07:53 -- host/discovery.sh@25 -- # nvmftestinit 00:27:08.169 15:07:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:08.169 15:07:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.169 15:07:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:08.169 15:07:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:08.169 15:07:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:08.169 15:07:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.169 15:07:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.169 15:07:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.169 15:07:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:08.169 15:07:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:08.169 15:07:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:08.169 15:07:53 -- common/autotest_common.sh@10 -- # set +x 00:27:10.073 15:07:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:10.073 15:07:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.073 15:07:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.073 15:07:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.073 15:07:55 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.073 15:07:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.073 15:07:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.073 15:07:55 -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.073 15:07:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.073 15:07:55 -- nvmf/common.sh@296 -- # e810=() 00:27:10.073 15:07:55 -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.073 15:07:55 -- nvmf/common.sh@297 -- # x722=() 00:27:10.073 15:07:55 -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.073 15:07:55 -- nvmf/common.sh@298 -- # mlx=() 00:27:10.073 15:07:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.074 15:07:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.074 15:07:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.074 15:07:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:10.074 15:07:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.074 15:07:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.074 15:07:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:10.074 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:10.074 15:07:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.074 15:07:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:10.074 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:10.074 15:07:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.074 15:07:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.074 
15:07:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.074 15:07:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:10.074 15:07:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.074 15:07:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:10.074 Found net devices under 0000:84:00.0: cvl_0_0 00:27:10.074 15:07:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.074 15:07:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.074 15:07:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.074 15:07:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:10.074 15:07:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.074 15:07:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:10.074 Found net devices under 0000:84:00.1: cvl_0_1 00:27:10.074 15:07:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.074 15:07:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:10.074 15:07:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:10.074 15:07:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:10.074 15:07:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.074 15:07:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.074 15:07:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.074 15:07:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:10.074 15:07:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.074 15:07:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.074 15:07:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:10.074 15:07:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.074 15:07:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.074 15:07:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:10.074 15:07:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:10.074 15:07:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.074 15:07:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.074 15:07:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.074 15:07:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.074 15:07:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.074 15:07:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.074 15:07:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.074 15:07:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.074 15:07:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:27:10.074 00:27:10.074 --- 10.0.0.2 ping statistics --- 00:27:10.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.074 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:27:10.074 15:07:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:27:10.074 00:27:10.074 --- 10.0.0.1 ping statistics --- 00:27:10.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.074 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:27:10.074 15:07:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.074 15:07:55 -- nvmf/common.sh@411 -- # return 0 00:27:10.074 15:07:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:10.074 15:07:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.074 15:07:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:10.074 15:07:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.074 15:07:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:10.074 15:07:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:10.074 15:07:55 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:10.074 15:07:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:10.074 15:07:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:10.074 15:07:55 -- common/autotest_common.sh@10 -- # set +x 00:27:10.074 15:07:55 -- nvmf/common.sh@470 -- # nvmfpid=3876755 00:27:10.074 15:07:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:10.074 15:07:55 -- nvmf/common.sh@471 -- # waitforlisten 3876755 00:27:10.074 15:07:55 -- common/autotest_common.sh@817 -- # '[' -z 3876755 ']' 00:27:10.074 15:07:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.074 15:07:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:10.074 15:07:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.074 15:07:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:10.074 15:07:55 -- common/autotest_common.sh@10 -- # set +x 00:27:10.074 [2024-04-26 15:07:55.720612] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:27:10.074 [2024-04-26 15:07:55.720713] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.074 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.074 [2024-04-26 15:07:55.759773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:10.074 [2024-04-26 15:07:55.791215] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.333 [2024-04-26 15:07:55.883280] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.333 [2024-04-26 15:07:55.883358] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.333 [2024-04-26 15:07:55.883375] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.333 [2024-04-26 15:07:55.883389] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:10.333 [2024-04-26 15:07:55.883402] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.333 [2024-04-26 15:07:55.883440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.333 15:07:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:10.333 15:07:56 -- common/autotest_common.sh@850 -- # return 0 00:27:10.333 15:07:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:10.333 15:07:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:10.333 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.333 15:07:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.333 15:07:56 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:10.333 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.333 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.334 [2024-04-26 15:07:56.033142] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.334 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.334 15:07:56 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:10.334 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.334 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.334 [2024-04-26 15:07:56.041387] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:10.334 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.334 15:07:56 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:10.334 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.334 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.334 null0 00:27:10.334 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.334 15:07:56 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:10.334 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.334 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.334 null1 00:27:10.334 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.334 15:07:56 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:10.334 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.334 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.334 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.334 15:07:56 -- host/discovery.sh@45 -- # hostpid=3876889 00:27:10.334 15:07:56 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:10.334 15:07:56 -- host/discovery.sh@46 -- # waitforlisten 3876889 /tmp/host.sock 00:27:10.334 15:07:56 -- common/autotest_common.sh@817 -- # '[' -z 3876889 ']' 00:27:10.334 15:07:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:27:10.334 15:07:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:10.334 15:07:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:10.334 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
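The nvmf_tcp_init sequence earlier in this trace is what makes a single-host TCP test possible: one port of the NIC pair is moved into a private network namespace, each side gets an address on 10.0.0.0/24, and a ping in each direction proves the path before any NVMe traffic flows. Condensed into a runnable sketch (the cvl_0_0/cvl_0_1 device names are the ones this run discovered; any two connected interfaces would do):

# Target side lives in its own namespace; initiator side stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                             # root ns -> namespaced target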
00:27:10.334 15:07:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:10.334 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.592 [2024-04-26 15:07:56.113724] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:27:10.592 [2024-04-26 15:07:56.113797] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3876889 ] 00:27:10.592 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.592 [2024-04-26 15:07:56.145915] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:10.592 [2024-04-26 15:07:56.177175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.592 [2024-04-26 15:07:56.267173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.851 15:07:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:10.851 15:07:56 -- common/autotest_common.sh@850 -- # return 0 00:27:10.851 15:07:56 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:10.851 15:07:56 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:10.851 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.851 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.851 15:07:56 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:10.851 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.851 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.851 15:07:56 -- host/discovery.sh@72 -- # notify_id=0 00:27:10.851 15:07:56 -- host/discovery.sh@83 -- # get_subsystem_names 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:10.851 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.851 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # sort 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # xargs 00:27:10.851 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.851 15:07:56 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:10.851 15:07:56 -- host/discovery.sh@84 -- # get_bdev_list 00:27:10.851 15:07:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.851 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.851 15:07:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:10.851 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 15:07:56 -- host/discovery.sh@55 -- # sort 00:27:10.851 15:07:56 -- host/discovery.sh@55 -- # xargs 00:27:10.851 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.851 15:07:56 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:10.851 15:07:56 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:10.851 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.851 15:07:56 -- common/autotest_common.sh@10 -- # set +x 
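From here the test drives two separate SPDK processes: nvmfpid (3876755) is the target, started inside the namespace on the default RPC socket, while hostpid (3876889) is a second nvmf_tgt on /tmp/host.sock whose bdev_nvme layer plays the host. A minimal sketch of that split, with $SPDK_BIN standing in for the build/bin path shown above and rpc.py assumed to be the scripts/rpc.py wrapper behind rpc_cmd in this harness:

# Target: NVMe-oF target in the namespace, discovery service listening on 8009.
ip netns exec cvl_0_0_ns_spdk $SPDK_BIN/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py bdev_null_create null0 1000 512       # backing namespaces for the test
rpc.py bdev_null_create null1 1000 512

# Host: a second app whose bdev_nvme layer follows the discovery service.
$SPDK_BIN/nvmf_tgt -m 0x1 -r /tmp/host.sock &
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

Every check that follows is a comparison between what the target was told to expose and what the host's discovery state machine has materialized as controllers (get_subsystem_names) and bdevs (get_bdev_list).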
00:27:10.851 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.851 15:07:56 -- host/discovery.sh@87 -- # get_subsystem_names 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:10.851 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:10.851 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # sort 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # xargs 00:27:10.851 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.851 15:07:56 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:10.851 15:07:56 -- host/discovery.sh@88 -- # get_bdev_list 00:27:10.851 15:07:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.851 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.851 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 15:07:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:10.851 15:07:56 -- host/discovery.sh@55 -- # sort 00:27:10.851 15:07:56 -- host/discovery.sh@55 -- # xargs 00:27:10.851 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.851 15:07:56 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:10.851 15:07:56 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:10.851 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.851 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.851 15:07:56 -- host/discovery.sh@91 -- # get_subsystem_names 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # sort 00:27:10.851 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.851 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 15:07:56 -- host/discovery.sh@59 -- # xargs 00:27:10.851 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.109 15:07:56 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:11.109 15:07:56 -- host/discovery.sh@92 -- # get_bdev_list 00:27:11.109 15:07:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.109 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.110 15:07:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:11.110 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:11.110 15:07:56 -- host/discovery.sh@55 -- # sort 00:27:11.110 15:07:56 -- host/discovery.sh@55 -- # xargs 00:27:11.110 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.110 15:07:56 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:11.110 15:07:56 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:11.110 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.110 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:11.110 [2024-04-26 15:07:56.646952] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.110 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.110 15:07:56 -- host/discovery.sh@97 -- # get_subsystem_names 00:27:11.110 15:07:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers 00:27:11.110 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.110 15:07:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:11.110 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:11.110 15:07:56 -- host/discovery.sh@59 -- # sort 00:27:11.110 15:07:56 -- host/discovery.sh@59 -- # xargs 00:27:11.110 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.110 15:07:56 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:11.110 15:07:56 -- host/discovery.sh@98 -- # get_bdev_list 00:27:11.110 15:07:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.110 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.110 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:11.110 15:07:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:11.110 15:07:56 -- host/discovery.sh@55 -- # sort 00:27:11.110 15:07:56 -- host/discovery.sh@55 -- # xargs 00:27:11.110 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.110 15:07:56 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:11.110 15:07:56 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:11.110 15:07:56 -- host/discovery.sh@79 -- # expected_count=0 00:27:11.110 15:07:56 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:11.110 15:07:56 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:11.110 15:07:56 -- common/autotest_common.sh@901 -- # local max=10 00:27:11.110 15:07:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:11.110 15:07:56 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:11.110 15:07:56 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:11.110 15:07:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:11.110 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.110 15:07:56 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:11.110 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:11.110 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.110 15:07:56 -- host/discovery.sh@74 -- # notification_count=0 00:27:11.110 15:07:56 -- host/discovery.sh@75 -- # notify_id=0 00:27:11.110 15:07:56 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:11.110 15:07:56 -- common/autotest_common.sh@904 -- # return 0 00:27:11.110 15:07:56 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:11.110 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.110 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:11.110 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.110 15:07:56 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:11.110 15:07:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:11.110 15:07:56 -- common/autotest_common.sh@901 -- # local max=10 00:27:11.110 15:07:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:11.110 15:07:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:11.110 15:07:56 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:11.110 15:07:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:11.110 15:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.110 15:07:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:11.110 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:27:11.110 15:07:56 -- host/discovery.sh@59 -- # sort 00:27:11.110 15:07:56 -- host/discovery.sh@59 -- # xargs 00:27:11.110 15:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.110 15:07:56 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:27:11.110 15:07:56 -- common/autotest_common.sh@906 -- # sleep 1 00:27:12.044 [2024-04-26 15:07:57.452190] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:12.044 [2024-04-26 15:07:57.452219] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:12.044 [2024-04-26 15:07:57.452240] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:12.044 [2024-04-26 15:07:57.539552] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:12.044 [2024-04-26 15:07:57.724722] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:12.044 [2024-04-26 15:07:57.724750] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:12.302 15:07:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.302 15:07:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:12.302 15:07:57 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:12.302 15:07:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:12.303 15:07:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:12.303 15:07:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.303 15:07:57 -- host/discovery.sh@59 -- # sort 00:27:12.303 15:07:57 -- 
common/autotest_common.sh@10 -- # set +x 00:27:12.303 15:07:57 -- host/discovery.sh@59 -- # xargs 00:27:12.303 15:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.303 15:07:57 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.303 15:07:57 -- common/autotest_common.sh@904 -- # return 0 00:27:12.303 15:07:57 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:12.303 15:07:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:12.303 15:07:57 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.303 15:07:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.303 15:07:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:12.303 15:07:57 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:12.303 15:07:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.303 15:07:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.303 15:07:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.303 15:07:57 -- common/autotest_common.sh@10 -- # set +x 00:27:12.303 15:07:57 -- host/discovery.sh@55 -- # sort 00:27:12.303 15:07:57 -- host/discovery.sh@55 -- # xargs 00:27:12.303 15:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.303 15:07:57 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:12.303 15:07:57 -- common/autotest_common.sh@904 -- # return 0 00:27:12.303 15:07:57 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:12.303 15:07:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:12.303 15:07:57 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.303 15:07:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.303 15:07:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:12.303 15:07:57 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:12.303 15:07:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:12.303 15:07:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:12.303 15:07:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.303 15:07:57 -- common/autotest_common.sh@10 -- # set +x 00:27:12.303 15:07:57 -- host/discovery.sh@63 -- # sort -n 00:27:12.303 15:07:57 -- host/discovery.sh@63 -- # xargs 00:27:12.303 15:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.303 15:07:57 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:27:12.303 15:07:57 -- common/autotest_common.sh@904 -- # return 0 00:27:12.303 15:07:57 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:12.303 15:07:57 -- host/discovery.sh@79 -- # expected_count=1 00:27:12.303 15:07:57 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:12.303 15:07:57 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:12.303 15:07:57 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.303 15:07:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.303 15:07:57 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:12.303 15:07:57 -- 
common/autotest_common.sh@903 -- # get_notification_count 00:27:12.303 15:07:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:12.303 15:07:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.303 15:07:57 -- host/discovery.sh@74 -- # jq '. | length' 00:27:12.303 15:07:57 -- common/autotest_common.sh@10 -- # set +x 00:27:12.303 15:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.303 15:07:57 -- host/discovery.sh@74 -- # notification_count=1 00:27:12.303 15:07:57 -- host/discovery.sh@75 -- # notify_id=1 00:27:12.303 15:07:57 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:12.303 15:07:57 -- common/autotest_common.sh@904 -- # return 0 00:27:12.303 15:07:57 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:12.303 15:07:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.303 15:07:57 -- common/autotest_common.sh@10 -- # set +x 00:27:12.303 15:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.303 15:07:58 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:12.303 15:07:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:12.303 15:07:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.303 15:07:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.303 15:07:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:12.303 15:07:58 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:12.303 15:07:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.303 15:07:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.303 15:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.303 15:07:58 -- common/autotest_common.sh@10 -- # set +x 00:27:12.303 15:07:58 -- host/discovery.sh@55 -- # sort 00:27:12.303 15:07:58 -- host/discovery.sh@55 -- # xargs 00:27:12.303 15:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.561 15:07:58 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:12.561 15:07:58 -- common/autotest_common.sh@904 -- # return 0 00:27:12.561 15:07:58 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:12.561 15:07:58 -- host/discovery.sh@79 -- # expected_count=1 00:27:12.561 15:07:58 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:12.561 15:07:58 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:12.561 15:07:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.561 15:07:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.561 15:07:58 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:12.561 15:07:58 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:12.561 15:07:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:12.561 15:07:58 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:12.561 15:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.561 15:07:58 -- common/autotest_common.sh@10 -- # set +x 00:27:12.561 15:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.561 15:07:58 -- host/discovery.sh@74 -- # notification_count=1 00:27:12.561 15:07:58 -- host/discovery.sh@75 -- # notify_id=2 00:27:12.561 15:07:58 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:12.561 15:07:58 -- common/autotest_common.sh@904 -- # return 0 00:27:12.561 15:07:58 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:12.561 15:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.561 15:07:58 -- common/autotest_common.sh@10 -- # set +x 00:27:12.561 [2024-04-26 15:07:58.091286] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:12.561 [2024-04-26 15:07:58.092334] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:12.561 [2024-04-26 15:07:58.092369] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:12.561 15:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.561 15:07:58 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:12.561 15:07:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:12.561 15:07:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.561 15:07:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.561 15:07:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:12.561 15:07:58 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:12.561 15:07:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:12.561 15:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.561 15:07:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:12.561 15:07:58 -- common/autotest_common.sh@10 -- # set +x 00:27:12.561 15:07:58 -- host/discovery.sh@59 -- # sort 00:27:12.562 15:07:58 -- host/discovery.sh@59 -- # xargs 00:27:12.562 15:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.562 15:07:58 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.562 15:07:58 -- common/autotest_common.sh@904 -- # return 0 00:27:12.562 15:07:58 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:12.562 15:07:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:12.562 15:07:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.562 15:07:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.562 15:07:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:12.562 15:07:58 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:12.562 15:07:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.562 15:07:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.562 15:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.562 15:07:58 -- common/autotest_common.sh@10 -- # set +x 00:27:12.562 15:07:58 -- host/discovery.sh@55 -- # sort 00:27:12.562 15:07:58 -- host/discovery.sh@55 -- # xargs 00:27:12.562 15:07:58 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:27:12.562 15:07:58 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:12.562 15:07:58 -- common/autotest_common.sh@904 -- # return 0 00:27:12.562 15:07:58 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:12.562 15:07:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:12.562 15:07:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.562 15:07:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.562 15:07:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:12.562 15:07:58 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:12.562 15:07:58 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:12.562 15:07:58 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:12.562 15:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.562 15:07:58 -- common/autotest_common.sh@10 -- # set +x 00:27:12.562 15:07:58 -- host/discovery.sh@63 -- # sort -n 00:27:12.562 15:07:58 -- host/discovery.sh@63 -- # xargs 00:27:12.562 15:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.562 [2024-04-26 15:07:58.220047] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:12.562 15:07:58 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:12.562 15:07:58 -- common/autotest_common.sh@906 -- # sleep 1 00:27:12.562 [2024-04-26 15:07:58.283654] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:12.562 [2024-04-26 15:07:58.283681] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:12.562 [2024-04-26 15:07:58.283692] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:13.494 15:07:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:13.494 15:07:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:13.494 15:07:59 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:13.494 15:07:59 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:13.494 15:07:59 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:13.494 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.494 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:13.494 15:07:59 -- host/discovery.sh@63 -- # sort -n 00:27:13.494 15:07:59 -- host/discovery.sh@63 -- # xargs 00:27:13.754 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.754 15:07:59 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:13.754 15:07:59 -- common/autotest_common.sh@904 -- # return 0 00:27:13.754 15:07:59 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:13.754 15:07:59 -- host/discovery.sh@79 -- # expected_count=0 00:27:13.754 15:07:59 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:13.754 15:07:59 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:13.754 15:07:59 -- common/autotest_common.sh@901 -- # local max=10 00:27:13.754 15:07:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:13.754 15:07:59 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:13.754 15:07:59 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:13.754 15:07:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:13.754 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.754 15:07:59 -- host/discovery.sh@74 -- # jq '. | length' 00:27:13.754 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:13.754 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.754 15:07:59 -- host/discovery.sh@74 -- # notification_count=0 00:27:13.754 15:07:59 -- host/discovery.sh@75 -- # notify_id=2 00:27:13.754 15:07:59 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:13.754 15:07:59 -- common/autotest_common.sh@904 -- # return 0 00:27:13.754 15:07:59 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:13.754 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.754 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:13.754 [2024-04-26 15:07:59.315774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.754 [2024-04-26 15:07:59.315812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.754 [2024-04-26 15:07:59.315833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.754 [2024-04-26 15:07:59.315849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.754 [2024-04-26 15:07:59.315866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.754 [2024-04-26 15:07:59.315881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.754 [2024-04-26 15:07:59.315897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.754 [2024-04-26 15:07:59.315913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.754 [2024-04-26 15:07:59.315928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751e70 is same with the state(5) to be set 00:27:13.754 [2024-04-26 15:07:59.316015] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:13.754 [2024-04-26 15:07:59.316066] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:13.754 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.754 15:07:59 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:13.754 15:07:59 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:13.754 15:07:59 -- common/autotest_common.sh@901 -- # local max=10 00:27:13.754 15:07:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:13.754 15:07:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:13.755 15:07:59 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:13.755 15:07:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:13.755 15:07:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:13.755 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.755 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:13.755 15:07:59 -- host/discovery.sh@59 -- # sort 00:27:13.755 15:07:59 -- host/discovery.sh@59 -- # xargs 00:27:13.755 [2024-04-26 15:07:59.325779] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1751e70 (9): Bad file descriptor 00:27:13.755 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.755 [2024-04-26 15:07:59.335823] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.755 [2024-04-26 15:07:59.336076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.336248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.336274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1751e70 with addr=10.0.0.2, port=4420 00:27:13.755 [2024-04-26 15:07:59.336290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751e70 is same with the state(5) to be set 00:27:13.755 [2024-04-26 15:07:59.336331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1751e70 (9): Bad file descriptor 00:27:13.755 [2024-04-26 15:07:59.336374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.755 [2024-04-26 15:07:59.336395] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.755 [2024-04-26 15:07:59.336410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.755 [2024-04-26 15:07:59.336445] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.755 [2024-04-26 15:07:59.345904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.755 [2024-04-26 15:07:59.346105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.346259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.346285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1751e70 with addr=10.0.0.2, port=4420 00:27:13.755 [2024-04-26 15:07:59.346315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751e70 is same with the state(5) to be set 00:27:13.755 [2024-04-26 15:07:59.346337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1751e70 (9): Bad file descriptor 00:27:13.755 [2024-04-26 15:07:59.346358] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.755 [2024-04-26 15:07:59.346389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.755 [2024-04-26 15:07:59.346404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.755 [2024-04-26 15:07:59.346425] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:13.755 [2024-04-26 15:07:59.355980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.755 [2024-04-26 15:07:59.356188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.356319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.356344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1751e70 with addr=10.0.0.2, port=4420 00:27:13.755 [2024-04-26 15:07:59.356359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751e70 is same with the state(5) to be set 00:27:13.755 [2024-04-26 15:07:59.356398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1751e70 (9): Bad file descriptor 00:27:13.755 [2024-04-26 15:07:59.356421] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.755 [2024-04-26 15:07:59.356437] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.755 [2024-04-26 15:07:59.356452] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.755 [2024-04-26 15:07:59.356478] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
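The burst of "connect() failed, errno = 111" entries here is ECONNREFUSED by another name, and it is the expected consequence of the step at host/discovery.sh@127 above: the 4420 listener was just torn down, so bdev_nvme's reconnect attempts to that port are refused until the next discovery log page marks the path gone. The trigger, and what the retries below eventually converge to, sketched:

rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# bdev_nvme keeps retrying 10.0.0.2:4420 (ECONNREFUSED) until the discovery
# poller reports the 4420 path "not found" and only the 4421 path remains.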
00:27:13.755 15:07:59 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.755 15:07:59 -- common/autotest_common.sh@904 -- # return 0 00:27:13.755 15:07:59 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:13.755 15:07:59 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:13.755 15:07:59 -- common/autotest_common.sh@901 -- # local max=10 00:27:13.755 15:07:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:13.755 15:07:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:13.755 15:07:59 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:13.755 15:07:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.755 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.755 15:07:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:13.755 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:13.755 15:07:59 -- host/discovery.sh@55 -- # sort 00:27:13.755 15:07:59 -- host/discovery.sh@55 -- # xargs 00:27:13.755 [2024-04-26 15:07:59.366586] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.755 [2024-04-26 15:07:59.366781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.366971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.367001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1751e70 with addr=10.0.0.2, port=4420 00:27:13.755 [2024-04-26 15:07:59.367029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751e70 is same with the state(5) to be set 00:27:13.755 [2024-04-26 15:07:59.367071] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1751e70 (9): Bad file descriptor 00:27:13.755 [2024-04-26 15:07:59.367092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.755 [2024-04-26 15:07:59.367106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.755 [2024-04-26 15:07:59.367119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.755 [2024-04-26 15:07:59.367139] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
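Interleaved with those retries, the harness keeps polling through waitforcondition. Reconstructed from the autotest_common.sh@900-906 fragments visible in this trace, the helper is essentially a bounded eval loop (a sketch; the real function may differ in detail):

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # cond is a shell expression string, e.g.
        # '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}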
00:27:13.755 [2024-04-26 15:07:59.376665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.755 [2024-04-26 15:07:59.376941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.377111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.377138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1751e70 with addr=10.0.0.2, port=4420 00:27:13.755 [2024-04-26 15:07:59.377154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751e70 is same with the state(5) to be set 00:27:13.755 [2024-04-26 15:07:59.377176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1751e70 (9): Bad file descriptor 00:27:13.755 [2024-04-26 15:07:59.377197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.755 [2024-04-26 15:07:59.377211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.755 [2024-04-26 15:07:59.377224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.755 [2024-04-26 15:07:59.377242] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:13.755 [2024-04-26 15:07:59.386740] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.755 [2024-04-26 15:07:59.386889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.387079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.387106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1751e70 with addr=10.0.0.2, port=4420 00:27:13.755 [2024-04-26 15:07:59.387122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751e70 is same with the state(5) to be set 00:27:13.755 [2024-04-26 15:07:59.387149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1751e70 (9): Bad file descriptor 00:27:13.755 [2024-04-26 15:07:59.387170] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.755 [2024-04-26 15:07:59.387185] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.755 [2024-04-26 15:07:59.387198] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.755 [2024-04-26 15:07:59.387216] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
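The @131/@132 checks that these retries race against reduce to two jq one-liners on the host socket, matching the get_subsystem_paths and get_notification_count helpers this trace keeps invoking:

# Which ports does the host still see for controller nvme0? (expect: 4421)
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

# How many bdev add/remove notifications arrived after notify_id 2? (expect: 0 here)
rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'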
00:27:13.755 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.755 [2024-04-26 15:07:59.396814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.755 [2024-04-26 15:07:59.396981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.397145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-04-26 15:07:59.397171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1751e70 with addr=10.0.0.2, port=4420 00:27:13.755 [2024-04-26 15:07:59.397187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751e70 is same with the state(5) to be set 00:27:13.755 [2024-04-26 15:07:59.397209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1751e70 (9): Bad file descriptor 00:27:13.755 [2024-04-26 15:07:59.397230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.755 [2024-04-26 15:07:59.397244] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.755 [2024-04-26 15:07:59.397257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.755 [2024-04-26 15:07:59.397276] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:13.755 15:07:59 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:13.755 15:07:59 -- common/autotest_common.sh@904 -- # return 0 00:27:13.755 15:07:59 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:13.755 15:07:59 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:13.755 15:07:59 -- common/autotest_common.sh@901 -- # local max=10 00:27:13.755 15:07:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:13.755 15:07:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:13.755 [2024-04-26 15:07:59.402700] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:13.755 [2024-04-26 15:07:59.402733] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:13.756 15:07:59 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:13.756 15:07:59 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:13.756 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.756 15:07:59 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:13.756 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:13.756 15:07:59 -- host/discovery.sh@63 -- # sort -n 00:27:13.756 15:07:59 -- host/discovery.sh@63 -- # xargs 00:27:13.756 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.756 15:07:59 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:27:13.756 15:07:59 -- common/autotest_common.sh@904 -- # return 0 00:27:13.756 15:07:59 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:13.756 15:07:59 -- host/discovery.sh@79 -- # expected_count=0 00:27:13.756 15:07:59 -- 
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:13.756 15:07:59 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:13.756 15:07:59 -- common/autotest_common.sh@901 -- # local max=10 00:27:13.756 15:07:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:13.756 15:07:59 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:13.756 15:07:59 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:13.756 15:07:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:13.756 15:07:59 -- host/discovery.sh@74 -- # jq '. | length' 00:27:13.756 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.756 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:13.756 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.756 15:07:59 -- host/discovery.sh@74 -- # notification_count=0 00:27:13.756 15:07:59 -- host/discovery.sh@75 -- # notify_id=2 00:27:13.756 15:07:59 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:13.756 15:07:59 -- common/autotest_common.sh@904 -- # return 0 00:27:13.756 15:07:59 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:13.756 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.756 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:14.014 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.014 15:07:59 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:14.014 15:07:59 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:14.014 15:07:59 -- common/autotest_common.sh@901 -- # local max=10 00:27:14.014 15:07:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:14.014 15:07:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:14.014 15:07:59 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:14.014 15:07:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.014 15:07:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:14.014 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.014 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:14.014 15:07:59 -- host/discovery.sh@59 -- # sort 00:27:14.014 15:07:59 -- host/discovery.sh@59 -- # xargs 00:27:14.014 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.014 15:07:59 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:27:14.014 15:07:59 -- common/autotest_common.sh@904 -- # return 0 00:27:14.014 15:07:59 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:14.014 15:07:59 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:14.014 15:07:59 -- common/autotest_common.sh@901 -- # local max=10 00:27:14.014 15:07:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:14.014 15:07:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:14.014 15:07:59 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:14.014 15:07:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.014 15:07:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:14.014 15:07:59 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.014 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:14.014 15:07:59 -- host/discovery.sh@55 -- # sort 00:27:14.014 15:07:59 -- host/discovery.sh@55 -- # xargs 00:27:14.014 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.014 15:07:59 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:27:14.014 15:07:59 -- common/autotest_common.sh@904 -- # return 0 00:27:14.014 15:07:59 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:14.014 15:07:59 -- host/discovery.sh@79 -- # expected_count=2 00:27:14.014 15:07:59 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:14.014 15:07:59 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:14.014 15:07:59 -- common/autotest_common.sh@901 -- # local max=10 00:27:14.014 15:07:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:14.014 15:07:59 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:14.014 15:07:59 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:14.014 15:07:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:14.014 15:07:59 -- host/discovery.sh@74 -- # jq '. | length' 00:27:14.014 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.014 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:14.014 15:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.014 15:07:59 -- host/discovery.sh@74 -- # notification_count=2 00:27:14.014 15:07:59 -- host/discovery.sh@75 -- # notify_id=4 00:27:14.014 15:07:59 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:14.014 15:07:59 -- common/autotest_common.sh@904 -- # return 0 00:27:14.014 15:07:59 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:14.014 15:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.014 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:14.950 [2024-04-26 15:08:00.679163] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:14.950 [2024-04-26 15:08:00.679209] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:14.950 [2024-04-26 15:08:00.679232] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:15.208 [2024-04-26 15:08:00.765470] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:15.466 [2024-04-26 15:08:01.036536] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:15.467 [2024-04-26 15:08:01.036600] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:15.467 15:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.467 15:08:01 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:15.467 15:08:01 -- common/autotest_common.sh@638 -- # local es=0 00:27:15.467 15:08:01 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:15.467 15:08:01 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:15.467 15:08:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:15.467 15:08:01 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:15.467 15:08:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:15.467 15:08:01 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:15.467 15:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.467 15:08:01 -- common/autotest_common.sh@10 -- # set +x 00:27:15.467 request: 00:27:15.467 { 00:27:15.467 "name": "nvme", 00:27:15.467 "trtype": "tcp", 00:27:15.467 "traddr": "10.0.0.2", 00:27:15.467 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:15.467 "adrfam": "ipv4", 00:27:15.467 "trsvcid": "8009", 00:27:15.467 "wait_for_attach": true, 00:27:15.467 "method": "bdev_nvme_start_discovery", 00:27:15.467 "req_id": 1 00:27:15.467 } 00:27:15.467 Got JSON-RPC error response 00:27:15.467 response: 00:27:15.467 { 00:27:15.467 "code": -17, 00:27:15.467 "message": "File exists" 00:27:15.467 } 00:27:15.467 15:08:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:15.467 15:08:01 -- common/autotest_common.sh@641 -- # es=1 00:27:15.467 15:08:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:15.467 15:08:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:15.467 15:08:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:15.467 15:08:01 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:15.467 15:08:01 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:15.467 15:08:01 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:15.467 15:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.467 15:08:01 -- host/discovery.sh@67 -- # sort 00:27:15.467 15:08:01 -- common/autotest_common.sh@10 -- # set +x 00:27:15.467 15:08:01 -- host/discovery.sh@67 -- # xargs 00:27:15.467 15:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.467 15:08:01 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:15.467 15:08:01 -- host/discovery.sh@146 -- # get_bdev_list 00:27:15.467 15:08:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.467 15:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.467 15:08:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:15.467 15:08:01 -- common/autotest_common.sh@10 -- # set +x 00:27:15.467 15:08:01 -- host/discovery.sh@55 -- # sort 00:27:15.467 15:08:01 -- host/discovery.sh@55 -- # xargs 00:27:15.467 15:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.467 15:08:01 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:15.467 15:08:01 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:15.467 15:08:01 -- common/autotest_common.sh@638 -- # local es=0 00:27:15.467 15:08:01 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:15.467 15:08:01 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:15.467 15:08:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:15.467 15:08:01 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:15.467 15:08:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:15.467 15:08:01 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:15.467 15:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.467 15:08:01 -- common/autotest_common.sh@10 -- # set +x 00:27:15.467 request: 00:27:15.467 { 00:27:15.467 "name": "nvme_second", 00:27:15.467 "trtype": "tcp", 00:27:15.467 "traddr": "10.0.0.2", 00:27:15.467 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:15.467 "adrfam": "ipv4", 00:27:15.467 "trsvcid": "8009", 00:27:15.467 "wait_for_attach": true, 00:27:15.467 "method": "bdev_nvme_start_discovery", 00:27:15.467 "req_id": 1 00:27:15.467 } 00:27:15.467 Got JSON-RPC error response 00:27:15.467 response: 00:27:15.467 { 00:27:15.467 "code": -17, 00:27:15.467 "message": "File exists" 00:27:15.467 } 00:27:15.467 15:08:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:15.467 15:08:01 -- common/autotest_common.sh@641 -- # es=1 00:27:15.467 15:08:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:15.467 15:08:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:15.467 15:08:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:15.467 15:08:01 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:15.467 15:08:01 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:15.467 15:08:01 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:15.467 15:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.467 15:08:01 -- common/autotest_common.sh@10 -- # set +x 00:27:15.467 15:08:01 -- host/discovery.sh@67 -- # sort 00:27:15.467 15:08:01 -- host/discovery.sh@67 -- # xargs 00:27:15.467 15:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.467 15:08:01 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:15.467 15:08:01 -- host/discovery.sh@152 -- # get_bdev_list 00:27:15.467 15:08:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.467 15:08:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:15.467 15:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.467 15:08:01 -- common/autotest_common.sh@10 -- # set +x 00:27:15.467 15:08:01 -- host/discovery.sh@55 -- # sort 00:27:15.467 15:08:01 -- host/discovery.sh@55 -- # xargs 00:27:15.725 15:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.725 15:08:01 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:15.725 15:08:01 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:15.725 15:08:01 -- common/autotest_common.sh@638 -- # local es=0 00:27:15.725 15:08:01 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:15.725 15:08:01 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:15.725 15:08:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:15.725 15:08:01 -- common/autotest_common.sh@630 -- # 
type -t rpc_cmd 00:27:15.725 15:08:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:15.725 15:08:01 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:15.725 15:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.725 15:08:01 -- common/autotest_common.sh@10 -- # set +x 00:27:16.658 [2024-04-26 15:08:02.248137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.658 [2024-04-26 15:08:02.248439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.658 [2024-04-26 15:08:02.248475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176d6e0 with addr=10.0.0.2, port=8010 00:27:16.658 [2024-04-26 15:08:02.248501] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:16.658 [2024-04-26 15:08:02.248523] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:16.658 [2024-04-26 15:08:02.248534] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:17.592 [2024-04-26 15:08:03.250608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.592 [2024-04-26 15:08:03.250871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.592 [2024-04-26 15:08:03.250901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176d6e0 with addr=10.0.0.2, port=8010 00:27:17.592 [2024-04-26 15:08:03.250935] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:17.592 [2024-04-26 15:08:03.250951] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:17.592 [2024-04-26 15:08:03.250966] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:18.523 [2024-04-26 15:08:04.252667] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:18.523 request: 00:27:18.523 { 00:27:18.523 "name": "nvme_second", 00:27:18.523 "trtype": "tcp", 00:27:18.523 "traddr": "10.0.0.2", 00:27:18.523 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:18.523 "adrfam": "ipv4", 00:27:18.523 "trsvcid": "8010", 00:27:18.523 "attach_timeout_ms": 3000, 00:27:18.523 "method": "bdev_nvme_start_discovery", 00:27:18.523 "req_id": 1 00:27:18.523 } 00:27:18.523 Got JSON-RPC error response 00:27:18.523 response: 00:27:18.523 { 00:27:18.523 "code": -110, 00:27:18.523 "message": "Connection timed out" 00:27:18.523 } 00:27:18.523 15:08:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:18.523 15:08:04 -- common/autotest_common.sh@641 -- # es=1 00:27:18.523 15:08:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:18.523 15:08:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:18.523 15:08:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:18.523 15:08:04 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:18.523 15:08:04 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:18.523 15:08:04 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:18.523 15:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.523 15:08:04 -- common/autotest_common.sh@10 -- # set +x 00:27:18.523 15:08:04 -- host/discovery.sh@67 -- # sort 00:27:18.523 15:08:04 -- host/discovery.sh@67 -- # xargs 
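Response code -110 maps to -ETIMEDOUT. Nothing listens on 10.0.0.2:8010 in this test, so each connect above is refused (errno 111) and the discovery poller retries until the 3000 ms attach timeout passed via -T expires, at which point the RPC fails. A standalone reproduction, assuming SPDK's stock scripts/rpc.py and the host socket used throughout this run:

    # Gives up after ~3 s with JSON-RPC error -110 "Connection timed out".
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000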
00:27:18.781 15:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.781 15:08:04 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:18.781 15:08:04 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:18.781 15:08:04 -- host/discovery.sh@161 -- # kill 3876889 00:27:18.781 15:08:04 -- host/discovery.sh@162 -- # nvmftestfini 00:27:18.781 15:08:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:18.781 15:08:04 -- nvmf/common.sh@117 -- # sync 00:27:18.781 15:08:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.781 15:08:04 -- nvmf/common.sh@120 -- # set +e 00:27:18.781 15:08:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:18.781 15:08:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.781 rmmod nvme_tcp 00:27:18.781 rmmod nvme_fabrics 00:27:18.781 rmmod nvme_keyring 00:27:18.781 15:08:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.781 15:08:04 -- nvmf/common.sh@124 -- # set -e 00:27:18.781 15:08:04 -- nvmf/common.sh@125 -- # return 0 00:27:18.781 15:08:04 -- nvmf/common.sh@478 -- # '[' -n 3876755 ']' 00:27:18.781 15:08:04 -- nvmf/common.sh@479 -- # killprocess 3876755 00:27:18.781 15:08:04 -- common/autotest_common.sh@936 -- # '[' -z 3876755 ']' 00:27:18.781 15:08:04 -- common/autotest_common.sh@940 -- # kill -0 3876755 00:27:18.781 15:08:04 -- common/autotest_common.sh@941 -- # uname 00:27:18.781 15:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:18.781 15:08:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3876755 00:27:18.781 15:08:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:18.781 15:08:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:18.781 15:08:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3876755' 00:27:18.781 killing process with pid 3876755 00:27:18.781 15:08:04 -- common/autotest_common.sh@955 -- # kill 3876755 00:27:18.781 15:08:04 -- common/autotest_common.sh@960 -- # wait 3876755 00:27:19.039 15:08:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:19.039 15:08:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:19.039 15:08:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:19.039 15:08:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:19.039 15:08:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:19.039 15:08:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.039 15:08:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.039 15:08:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.940 15:08:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:20.940 00:27:20.940 real 0m13.228s 00:27:20.940 user 0m19.090s 00:27:20.940 sys 0m2.812s 00:27:20.940 15:08:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:20.940 15:08:06 -- common/autotest_common.sh@10 -- # set +x 00:27:20.940 ************************************ 00:27:20.940 END TEST nvmf_discovery 00:27:20.940 ************************************ 00:27:21.198 15:08:06 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:21.198 15:08:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:21.198 15:08:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:21.198 15:08:06 -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 ************************************ 00:27:21.198 START 
TEST nvmf_discovery_remove_ifc 00:27:21.198 ************************************ 00:27:21.198 15:08:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:21.198 * Looking for test storage... 00:27:21.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:21.198 15:08:06 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.198 15:08:06 -- nvmf/common.sh@7 -- # uname -s 00:27:21.198 15:08:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.198 15:08:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.198 15:08:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.198 15:08:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.198 15:08:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.198 15:08:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.198 15:08:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.198 15:08:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.198 15:08:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.198 15:08:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.198 15:08:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:21.198 15:08:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:21.198 15:08:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.198 15:08:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.198 15:08:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.198 15:08:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.198 15:08:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.198 15:08:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.198 15:08:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.198 15:08:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.198 15:08:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.198 15:08:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.198 15:08:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.198 15:08:06 -- paths/export.sh@5 -- # export PATH 00:27:21.198 15:08:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.198 15:08:06 -- nvmf/common.sh@47 -- # : 0 00:27:21.198 15:08:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:21.198 15:08:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.198 15:08:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.198 15:08:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.198 15:08:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.198 15:08:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.198 15:08:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.198 15:08:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.198 15:08:06 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:21.198 15:08:06 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:21.198 15:08:06 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:21.198 15:08:06 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:21.199 15:08:06 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:21.199 15:08:06 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:21.199 15:08:06 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:21.199 15:08:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:21.199 15:08:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.199 15:08:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:21.199 15:08:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:21.199 15:08:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:21.199 15:08:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.199 15:08:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.199 15:08:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.199 15:08:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:21.199 15:08:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:21.199 15:08:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.199 15:08:06 -- common/autotest_common.sh@10 -- # set +x 00:27:23.101 15:08:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:23.101 15:08:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.101 15:08:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.101 15:08:08 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.101 15:08:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.101 15:08:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.101 15:08:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.101 15:08:08 -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.101 15:08:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.101 15:08:08 -- nvmf/common.sh@296 -- # e810=() 00:27:23.101 15:08:08 -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.101 15:08:08 -- nvmf/common.sh@297 -- # x722=() 00:27:23.101 15:08:08 -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.101 15:08:08 -- nvmf/common.sh@298 -- # mlx=() 00:27:23.101 15:08:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.101 15:08:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.101 15:08:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.101 15:08:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.101 15:08:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.101 15:08:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.101 15:08:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:23.101 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:23.101 15:08:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.101 15:08:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:23.101 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:23.101 15:08:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.101 15:08:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.101 15:08:08 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.101 15:08:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.101 15:08:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:23.101 15:08:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.101 15:08:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:23.101 Found net devices under 0000:84:00.0: cvl_0_0 00:27:23.101 15:08:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.101 15:08:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.101 15:08:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.101 15:08:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:23.101 15:08:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.101 15:08:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:23.101 Found net devices under 0000:84:00.1: cvl_0_1 00:27:23.101 15:08:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.101 15:08:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:23.101 15:08:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:23.101 15:08:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:23.101 15:08:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:23.101 15:08:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.101 15:08:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.101 15:08:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.101 15:08:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:23.101 15:08:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.101 15:08:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.101 15:08:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:23.101 15:08:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.101 15:08:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.101 15:08:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:23.101 15:08:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:23.101 15:08:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.101 15:08:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.409 15:08:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.409 15:08:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.409 15:08:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:23.409 15:08:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.409 15:08:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.409 15:08:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.409 15:08:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:23.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:23.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:27:23.409 00:27:23.409 --- 10.0.0.2 ping statistics --- 00:27:23.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.409 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:27:23.409 15:08:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:27:23.409 00:27:23.409 --- 10.0.0.1 ping statistics --- 00:27:23.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.409 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:27:23.409 15:08:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.409 15:08:08 -- nvmf/common.sh@411 -- # return 0 00:27:23.409 15:08:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:23.409 15:08:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.409 15:08:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:23.409 15:08:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:23.409 15:08:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.409 15:08:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:23.409 15:08:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:23.409 15:08:08 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:23.409 15:08:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:23.409 15:08:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:23.409 15:08:08 -- common/autotest_common.sh@10 -- # set +x 00:27:23.409 15:08:08 -- nvmf/common.sh@470 -- # nvmfpid=3879950 00:27:23.409 15:08:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:23.409 15:08:08 -- nvmf/common.sh@471 -- # waitforlisten 3879950 00:27:23.409 15:08:08 -- common/autotest_common.sh@817 -- # '[' -z 3879950 ']' 00:27:23.409 15:08:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.409 15:08:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:23.409 15:08:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.409 15:08:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:23.409 15:08:08 -- common/autotest_common.sh@10 -- # set +x 00:27:23.409 [2024-04-26 15:08:08.985889] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:27:23.409 [2024-04-26 15:08:08.985980] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.409 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.409 [2024-04-26 15:08:09.025870] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:23.409 [2024-04-26 15:08:09.056499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.669 [2024-04-26 15:08:09.146541] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:23.669 [2024-04-26 15:08:09.146604] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.669 [2024-04-26 15:08:09.146632] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.669 [2024-04-26 15:08:09.146654] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.669 [2024-04-26 15:08:09.146665] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:23.669 [2024-04-26 15:08:09.146694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.669 15:08:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:23.669 15:08:09 -- common/autotest_common.sh@850 -- # return 0 00:27:23.669 15:08:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:23.669 15:08:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:23.669 15:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:23.669 15:08:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.669 15:08:09 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:23.669 15:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.669 15:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:23.669 [2024-04-26 15:08:09.308186] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.669 [2024-04-26 15:08:09.316395] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:23.669 null0 00:27:23.669 [2024-04-26 15:08:09.348295] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.669 15:08:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.669 15:08:09 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3879973 00:27:23.669 15:08:09 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:23.669 15:08:09 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3879973 /tmp/host.sock 00:27:23.669 15:08:09 -- common/autotest_common.sh@817 -- # '[' -z 3879973 ']' 00:27:23.669 15:08:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:27:23.669 15:08:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:23.669 15:08:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:23.669 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:23.669 15:08:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:23.669 15:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:23.928 [2024-04-26 15:08:09.414247] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:27:23.928 [2024-04-26 15:08:09.414316] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879973 ] 00:27:23.928 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.928 [2024-04-26 15:08:09.447501] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:23.928 [2024-04-26 15:08:09.478230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.928 [2024-04-26 15:08:09.569338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.928 15:08:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:23.928 15:08:09 -- common/autotest_common.sh@850 -- # return 0 00:27:23.928 15:08:09 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:23.928 15:08:09 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:23.928 15:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.928 15:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:23.928 15:08:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.928 15:08:09 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:23.928 15:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.928 15:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:24.186 15:08:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.186 15:08:09 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:24.186 15:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.186 15:08:09 -- common/autotest_common.sh@10 -- # set +x 00:27:25.120 [2024-04-26 15:08:10.805734] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:25.120 [2024-04-26 15:08:10.805777] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:25.120 [2024-04-26 15:08:10.805798] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:25.378 [2024-04-26 15:08:10.935228] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:25.378 [2024-04-26 15:08:10.996770] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:25.378 [2024-04-26 15:08:10.996826] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:25.378 [2024-04-26 15:08:10.996864] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:25.378 [2024-04-26 15:08:10.996887] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:25.378 [2024-04-26 15:08:10.996926] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:25.378 15:08:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.378 15:08:10 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:25.378 15:08:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.378 15:08:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.378 15:08:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.378 15:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.378 15:08:11 -- common/autotest_common.sh@10 -- # set +x 00:27:25.379 15:08:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:27:25.379 15:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:25.379 [2024-04-26 15:08:11.044697] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x171c800 was disconnected and freed. delete nvme_qpair. 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.379 15:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.379 15:08:11 -- common/autotest_common.sh@10 -- # set +x 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.379 15:08:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.379 15:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.637 15:08:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.637 15:08:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.570 15:08:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.570 15:08:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.570 15:08:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.571 15:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.571 15:08:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.571 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:27:26.571 15:08:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.571 15:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.571 15:08:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:26.571 15:08:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.504 15:08:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.504 15:08:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.504 15:08:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.504 15:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.504 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:27:27.504 15:08:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.504 15:08:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.504 15:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.504 15:08:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:27.504 15:08:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.874 15:08:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.874 15:08:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.874 15:08:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.874 15:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.874 15:08:14 -- common/autotest_common.sh@10 -- # set +x 00:27:28.874 15:08:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.874 15:08:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.874 15:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.874 
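The @75/@76 steps above deliberately delete the target-side address and take cvl_0_0 down inside the namespace; the errno 110 (ETIMEDOUT) burst that follows is the intended outcome, and the once-per-second bdev_get_bdevs polls continue until nvme0n1 disappears. Reconstructed from the repeating trace (discovery_remove_ifc.sh@29-@34), with the suite's rpc_cmd wrapper replaced by a direct rpc.py call so the sketch is self-contained; the real helpers may differ in detail:

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # '' waits for the bdev to be removed; 'nvme1n1' later waits for re-attach.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }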
15:08:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:28.874 15:08:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:29.807 15:08:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:29.807 15:08:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.807 15:08:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:29.807 15:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.807 15:08:15 -- common/autotest_common.sh@10 -- # set +x 00:27:29.807 15:08:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:29.807 15:08:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:29.807 15:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.807 15:08:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:29.807 15:08:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:30.740 15:08:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.740 15:08:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.740 15:08:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.740 15:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.740 15:08:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.740 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:27:30.740 15:08:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.740 15:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.740 15:08:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:30.740 15:08:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:30.740 [2024-04-26 15:08:16.437855] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:30.740 [2024-04-26 15:08:16.437940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.740 [2024-04-26 15:08:16.437962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.740 [2024-04-26 15:08:16.437981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.740 [2024-04-26 15:08:16.437995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.740 [2024-04-26 15:08:16.438035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.740 [2024-04-26 15:08:16.438051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.740 [2024-04-26 15:08:16.438081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.740 [2024-04-26 15:08:16.438103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.740 [2024-04-26 15:08:16.438118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.740 [2024-04-26 15:08:16.438133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.740 [2024-04-26 15:08:16.438159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e2b40 is same with the state(5) to be set 00:27:30.740 [2024-04-26 15:08:16.447872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e2b40 (9): Bad file descriptor 00:27:30.740 [2024-04-26 15:08:16.457918] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.673 15:08:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.673 15:08:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.673 15:08:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.673 15:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.673 15:08:17 -- common/autotest_common.sh@10 -- # set +x 00:27:31.673 15:08:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.673 15:08:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.931 [2024-04-26 15:08:17.482069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:32.864 [2024-04-26 15:08:18.506053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:32.864 [2024-04-26 15:08:18.506116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e2b40 with addr=10.0.0.2, port=4420 00:27:32.864 [2024-04-26 15:08:18.506142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e2b40 is same with the state(5) to be set 00:27:32.864 [2024-04-26 15:08:18.506639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e2b40 (9): Bad file descriptor 00:27:32.864 [2024-04-26 15:08:18.506685] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
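The ASYNC EVENT REQUEST and KEEP ALIVE entries in the dump above are admin commands that were still queued when the qpair was deleted; each is completed with ABORTED - SQ DELETION rather than a real status, which is why the same completion line repeats. While a path is still attached, the host-side view can be inspected over the host RPC socket with the same query the discovery test used earlier; a small check, assuming the stock scripts/rpc.py (this exact call is not part of this trace):

    # Lists the service port(s) of whatever paths remain on controller nvme0.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'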
00:27:32.864 [2024-04-26 15:08:18.506729] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:32.864 [2024-04-26 15:08:18.506781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.864 [2024-04-26 15:08:18.506805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.864 [2024-04-26 15:08:18.506826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.864 [2024-04-26 15:08:18.506842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.864 [2024-04-26 15:08:18.506859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.864 [2024-04-26 15:08:18.506875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.864 [2024-04-26 15:08:18.506891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.864 [2024-04-26 15:08:18.506908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.864 [2024-04-26 15:08:18.506925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.864 [2024-04-26 15:08:18.506941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.864 [2024-04-26 15:08:18.506957] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:32.864 [2024-04-26 15:08:18.507178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e2f50 (9): Bad file descriptor 00:27:32.864 [2024-04-26 15:08:18.508195] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:32.864 [2024-04-26 15:08:18.508216] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:32.864 15:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.864 15:08:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:32.864 15:08:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.797 15:08:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.797 15:08:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.797 15:08:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.797 15:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:33.797 15:08:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.797 15:08:19 -- common/autotest_common.sh@10 -- # set +x 00:27:33.797 15:08:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.797 15:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.055 15:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.055 15:08:19 -- common/autotest_common.sh@10 -- # set +x 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.055 15:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:34.055 15:08:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.989 [2024-04-26 15:08:20.563214] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:34.989 [2024-04-26 15:08:20.563258] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:34.989 [2024-04-26 15:08:20.563283] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:34.990 [2024-04-26 15:08:20.650544] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:34.990 15:08:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.990 15:08:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.990 15:08:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.990 15:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.990 15:08:20 -- common/autotest_common.sh@10 -- # set +x 00:27:34.990 15:08:20 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:27:34.990 15:08:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.990 15:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.990 15:08:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:34.990 15:08:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:35.247 [2024-04-26 15:08:20.831894] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:35.247 [2024-04-26 15:08:20.831941] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:35.247 [2024-04-26 15:08:20.831987] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:35.247 [2024-04-26 15:08:20.832010] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:35.247 [2024-04-26 15:08:20.832049] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:35.247 [2024-04-26 15:08:20.840526] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17271c0 was disconnected and freed. delete nvme_qpair. 00:27:36.215 15:08:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.215 15:08:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.215 15:08:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.215 15:08:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.215 15:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.215 15:08:21 -- common/autotest_common.sh@10 -- # set +x 00:27:36.215 15:08:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.215 15:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.215 15:08:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:36.215 15:08:21 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:36.215 15:08:21 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3879973 00:27:36.215 15:08:21 -- common/autotest_common.sh@936 -- # '[' -z 3879973 ']' 00:27:36.215 15:08:21 -- common/autotest_common.sh@940 -- # kill -0 3879973 00:27:36.215 15:08:21 -- common/autotest_common.sh@941 -- # uname 00:27:36.215 15:08:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:36.215 15:08:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3879973 00:27:36.215 15:08:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:36.215 15:08:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:36.215 15:08:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3879973' 00:27:36.215 killing process with pid 3879973 00:27:36.215 15:08:21 -- common/autotest_common.sh@955 -- # kill 3879973 00:27:36.215 15:08:21 -- common/autotest_common.sh@960 -- # wait 3879973 00:27:36.472 15:08:21 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:36.472 15:08:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:36.472 15:08:21 -- nvmf/common.sh@117 -- # sync 00:27:36.472 15:08:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.472 15:08:21 -- nvmf/common.sh@120 -- # set +e 00:27:36.472 15:08:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.472 15:08:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.472 rmmod nvme_tcp 00:27:36.472 rmmod nvme_fabrics 00:27:36.472 rmmod nvme_keyring 00:27:36.472 15:08:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.472 15:08:22 -- nvmf/common.sh@124 -- # set -e 00:27:36.472 15:08:22 
-- nvmf/common.sh@125 -- # return 0 00:27:36.472 15:08:22 -- nvmf/common.sh@478 -- # '[' -n 3879950 ']' 00:27:36.472 15:08:22 -- nvmf/common.sh@479 -- # killprocess 3879950 00:27:36.472 15:08:22 -- common/autotest_common.sh@936 -- # '[' -z 3879950 ']' 00:27:36.472 15:08:22 -- common/autotest_common.sh@940 -- # kill -0 3879950 00:27:36.472 15:08:22 -- common/autotest_common.sh@941 -- # uname 00:27:36.472 15:08:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:36.472 15:08:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3879950 00:27:36.472 15:08:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:36.472 15:08:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:36.472 15:08:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3879950' 00:27:36.472 killing process with pid 3879950 00:27:36.472 15:08:22 -- common/autotest_common.sh@955 -- # kill 3879950 00:27:36.472 15:08:22 -- common/autotest_common.sh@960 -- # wait 3879950 00:27:36.729 15:08:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:36.729 15:08:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:36.729 15:08:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:36.729 15:08:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.729 15:08:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.729 15:08:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.729 15:08:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.729 15:08:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.674 15:08:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.674 00:27:38.674 real 0m17.538s 00:27:38.674 user 0m24.451s 00:27:38.674 sys 0m2.974s 00:27:38.674 15:08:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:38.674 15:08:24 -- common/autotest_common.sh@10 -- # set +x 00:27:38.674 ************************************ 00:27:38.674 END TEST nvmf_discovery_remove_ifc 00:27:38.674 ************************************ 00:27:38.674 15:08:24 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:38.674 15:08:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:38.674 15:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:38.674 15:08:24 -- common/autotest_common.sh@10 -- # set +x 00:27:38.932 ************************************ 00:27:38.932 START TEST nvmf_identify_kernel_target 00:27:38.932 ************************************ 00:27:38.932 15:08:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:38.932 * Looking for test storage... 
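Before the next test begins, the nvmftestfini teardown traced above unloads the host-side NVMe modules. Condensed from the nvmf/common.sh records, a sketch of that pattern (the retry pacing is an assumption; the trace only shows the {1..20} loop bounds and the set +e/-e bracketing):

  # Unload nvme-tcp with retries: qpairs may still be draining, so
  # transient "module in use" failures are tolerated until the module
  # refcount drops; nvme-fabrics is removed once nvme-tcp is gone.
  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 0.5   # assumption: the real loop may retry without a delay
  done
  modprobe -v -r nvme-fabrics
  set -e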
00:27:38.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.932 15:08:24 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.932 15:08:24 -- nvmf/common.sh@7 -- # uname -s 00:27:38.932 15:08:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.932 15:08:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.932 15:08:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.932 15:08:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.932 15:08:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.932 15:08:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.932 15:08:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.932 15:08:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.932 15:08:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.932 15:08:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.932 15:08:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:38.932 15:08:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:38.932 15:08:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.932 15:08:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.932 15:08:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.932 15:08:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.932 15:08:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.932 15:08:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.932 15:08:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.932 15:08:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.932 15:08:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.932 15:08:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.932 15:08:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.932 15:08:24 -- paths/export.sh@5 -- # export PATH 00:27:38.932 15:08:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.932 15:08:24 -- nvmf/common.sh@47 -- # : 0 00:27:38.932 15:08:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.932 15:08:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.932 15:08:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.932 15:08:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.932 15:08:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.932 15:08:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.932 15:08:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.932 15:08:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.932 15:08:24 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:38.932 15:08:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:38.932 15:08:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.932 15:08:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:38.932 15:08:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:38.932 15:08:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:38.932 15:08:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.932 15:08:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.932 15:08:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.932 15:08:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:38.932 15:08:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:38.932 15:08:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.932 15:08:24 -- common/autotest_common.sh@10 -- # set +x 00:27:40.833 15:08:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:40.833 15:08:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:40.833 15:08:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:40.833 15:08:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:40.833 15:08:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:40.833 15:08:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:40.833 15:08:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:40.833 15:08:26 -- nvmf/common.sh@295 -- # net_devs=() 00:27:40.833 15:08:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:40.833 15:08:26 -- nvmf/common.sh@296 -- # e810=() 00:27:40.833 15:08:26 -- nvmf/common.sh@296 -- # local -ga e810 00:27:40.833 15:08:26 -- nvmf/common.sh@297 -- # 
x722=() 00:27:40.833 15:08:26 -- nvmf/common.sh@297 -- # local -ga x722 00:27:40.833 15:08:26 -- nvmf/common.sh@298 -- # mlx=() 00:27:40.833 15:08:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:40.833 15:08:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.833 15:08:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:40.833 15:08:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:40.833 15:08:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:40.833 15:08:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.833 15:08:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:40.833 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:40.833 15:08:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.833 15:08:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:40.833 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:40.833 15:08:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:40.833 15:08:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:40.833 15:08:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.834 15:08:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.834 15:08:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:40.834 15:08:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.834 15:08:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:40.834 Found net devices under 0000:84:00.0: cvl_0_0 00:27:40.834 15:08:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
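The per-device scan above repeats below for the second port (0000:84:00.1). The interesting part is how a PCI function is mapped to its Linux interface name: sysfs lists the bound netdevs under the device's net/ directory, so no driver-specific tooling is needed. A condensed sketch of the loop visible in the nvmf/common.sh records:

  # For each detected PCI BDF, enumerate its network interfaces and
  # collect them, e.g. /sys/bus/pci/devices/0000:84:00.0/net/cvl_0_0.
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done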
00:27:40.834 15:08:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.834 15:08:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.834 15:08:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:40.834 15:08:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.834 15:08:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:40.834 Found net devices under 0000:84:00.1: cvl_0_1 00:27:40.834 15:08:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.834 15:08:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:40.834 15:08:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:40.834 15:08:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:40.834 15:08:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:40.834 15:08:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:40.834 15:08:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.834 15:08:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.834 15:08:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.834 15:08:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:40.834 15:08:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.834 15:08:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.834 15:08:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:40.834 15:08:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.834 15:08:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.834 15:08:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:40.834 15:08:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:40.834 15:08:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.834 15:08:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.092 15:08:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.092 15:08:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.092 15:08:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:41.092 15:08:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.092 15:08:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.092 15:08:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.092 15:08:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:41.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:27:41.092 00:27:41.092 --- 10.0.0.2 ping statistics --- 00:27:41.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.092 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:27:41.092 15:08:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:27:41.092 00:27:41.092 --- 10.0.0.1 ping statistics --- 00:27:41.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.092 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:41.092 15:08:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.092 15:08:26 -- nvmf/common.sh@411 -- # return 0 00:27:41.092 15:08:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:41.092 15:08:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.092 15:08:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:41.092 15:08:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:41.092 15:08:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.092 15:08:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:41.092 15:08:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:41.092 15:08:26 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:41.092 15:08:26 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:41.092 15:08:26 -- nvmf/common.sh@717 -- # local ip 00:27:41.092 15:08:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:41.092 15:08:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:41.092 15:08:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.092 15:08:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.092 15:08:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:41.092 15:08:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.092 15:08:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:41.092 15:08:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:41.093 15:08:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:41.093 15:08:26 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:41.093 15:08:26 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:41.093 15:08:26 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:41.093 15:08:26 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:41.093 15:08:26 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:41.093 15:08:26 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:41.093 15:08:26 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:41.093 15:08:26 -- nvmf/common.sh@628 -- # local block nvme 00:27:41.093 15:08:26 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:41.093 15:08:26 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:41.093 15:08:26 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:41.093 15:08:26 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:42.469 Waiting for block devices as requested 00:27:42.469 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:42.469 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:42.469 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:42.469 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:42.469 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:42.726 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:42.726 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:42.726 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:42.726 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:42.983 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:42.983 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:42.984 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:42.984 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:43.241 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:43.242 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:43.242 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:43.242 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:43.501 15:08:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:43.501 15:08:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:43.501 15:08:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:43.501 15:08:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:43.501 15:08:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:43.501 15:08:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:43.501 15:08:29 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:43.501 15:08:29 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:43.501 15:08:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:43.501 No valid GPT data, bailing 00:27:43.501 15:08:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:43.501 15:08:29 -- scripts/common.sh@391 -- # pt= 00:27:43.501 15:08:29 -- scripts/common.sh@392 -- # return 1 00:27:43.501 15:08:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:43.501 15:08:29 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:27:43.501 15:08:29 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:43.501 15:08:29 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:43.501 15:08:29 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:43.501 15:08:29 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:43.501 15:08:29 -- nvmf/common.sh@656 -- # echo 1 00:27:43.501 15:08:29 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:27:43.501 15:08:29 -- nvmf/common.sh@658 -- # echo 1 00:27:43.501 15:08:29 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:43.501 15:08:29 -- nvmf/common.sh@661 -- # echo tcp 00:27:43.501 15:08:29 -- nvmf/common.sh@662 -- # echo 4420 00:27:43.501 15:08:29 -- nvmf/common.sh@663 -- # echo ipv4 00:27:43.501 15:08:29 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:43.501 15:08:29 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:43.501 00:27:43.501 Discovery Log Number of Records 2, Generation counter 2 00:27:43.501 =====Discovery Log Entry 0====== 00:27:43.501 trtype: tcp 00:27:43.501 adrfam: ipv4 00:27:43.501 subtype: current discovery subsystem 00:27:43.501 treq: not specified, sq flow control disable supported 00:27:43.501 portid: 1 00:27:43.501 trsvcid: 4420 00:27:43.501 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:43.501 traddr: 10.0.0.1 00:27:43.501 eflags: none 00:27:43.501 sectype: none 00:27:43.501 =====Discovery Log Entry 1====== 00:27:43.501 trtype: tcp 00:27:43.501 adrfam: ipv4 00:27:43.501 subtype: nvme subsystem 00:27:43.501 treq: not specified, sq flow control disable supported 00:27:43.501 portid: 1 00:27:43.501 trsvcid: 4420 00:27:43.501 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:43.501 traddr: 10.0.0.1 00:27:43.501 eflags: none 00:27:43.501 sectype: none 00:27:43.501 15:08:29 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:43.501 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:43.501 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.501 ===================================================== 00:27:43.501 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:43.501 ===================================================== 00:27:43.501 Controller Capabilities/Features 00:27:43.501 ================================ 00:27:43.501 Vendor ID: 0000 00:27:43.501 Subsystem Vendor ID: 0000 00:27:43.501 Serial Number: 0e93f5c63d58e130be42 00:27:43.501 Model Number: Linux 00:27:43.501 Firmware Version: 6.7.0-68 00:27:43.501 Recommended Arb Burst: 0 00:27:43.501 IEEE OUI Identifier: 00 00 00 00:27:43.501 Multi-path I/O 00:27:43.501 May have multiple subsystem ports: No 00:27:43.501 May have multiple controllers: No 00:27:43.501 Associated with SR-IOV VF: No 00:27:43.501 Max Data Transfer Size: Unlimited 00:27:43.501 Max Number of Namespaces: 0 00:27:43.501 Max Number of I/O Queues: 1024 00:27:43.501 NVMe Specification Version (VS): 1.3 00:27:43.501 NVMe Specification Version (Identify): 1.3 00:27:43.501 Maximum Queue Entries: 1024 00:27:43.501 Contiguous Queues Required: No 00:27:43.501 Arbitration Mechanisms Supported 00:27:43.501 Weighted Round Robin: Not Supported 00:27:43.501 Vendor Specific: Not Supported 00:27:43.501 Reset Timeout: 7500 ms 00:27:43.501 Doorbell Stride: 4 bytes 00:27:43.501 NVM Subsystem Reset: Not Supported 00:27:43.501 Command Sets Supported 00:27:43.501 NVM Command Set: Supported 00:27:43.501 Boot Partition: Not Supported 00:27:43.501 Memory Page Size Minimum: 4096 bytes 00:27:43.501 Memory Page Size Maximum: 4096 bytes 00:27:43.501 Persistent Memory Region: Not Supported 00:27:43.501 Optional Asynchronous Events Supported 00:27:43.501 Namespace Attribute Notices: Not Supported 00:27:43.501 Firmware Activation Notices: Not Supported 00:27:43.501 ANA Change Notices: Not Supported 00:27:43.501 PLE Aggregate Log Change Notices: Not Supported 00:27:43.501 LBA Status Info Alert Notices: Not Supported 00:27:43.501 EGE Aggregate Log Change Notices: Not Supported 00:27:43.501 Normal NVM Subsystem Shutdown event: Not Supported 00:27:43.501 Zone Descriptor Change Notices: Not Supported 00:27:43.501 Discovery Log Change Notices: Supported 
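The discovery log above (two records: the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn) is served by the kernel nvmet target configured a few records earlier; the identify dump continues below. Gathered into one place, those configfs steps look roughly like this (the xtrace shows only the echoed values, so the redirect targets are inferred from the standard nvmet configfs layout):

  # Kernel NVMe-oF TCP target as traced: one subsystem, one namespace
  # backed by /dev/nvme0n1, one port listening on 10.0.0.1:4420.
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo "SPDK-$nqn"  > "$subsys/attr_serial"           # inferred target file
  echo 1            > "$subsys/attr_allow_any_host"   # inferred target file
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port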
00:27:43.501 Controller Attributes 00:27:43.501 128-bit Host Identifier: Not Supported 00:27:43.501 Non-Operational Permissive Mode: Not Supported 00:27:43.501 NVM Sets: Not Supported 00:27:43.501 Read Recovery Levels: Not Supported 00:27:43.501 Endurance Groups: Not Supported 00:27:43.501 Predictable Latency Mode: Not Supported 00:27:43.501 Traffic Based Keep ALive: Not Supported 00:27:43.501 Namespace Granularity: Not Supported 00:27:43.501 SQ Associations: Not Supported 00:27:43.501 UUID List: Not Supported 00:27:43.501 Multi-Domain Subsystem: Not Supported 00:27:43.501 Fixed Capacity Management: Not Supported 00:27:43.501 Variable Capacity Management: Not Supported 00:27:43.501 Delete Endurance Group: Not Supported 00:27:43.501 Delete NVM Set: Not Supported 00:27:43.501 Extended LBA Formats Supported: Not Supported 00:27:43.501 Flexible Data Placement Supported: Not Supported 00:27:43.501 00:27:43.501 Controller Memory Buffer Support 00:27:43.501 ================================ 00:27:43.501 Supported: No 00:27:43.501 00:27:43.501 Persistent Memory Region Support 00:27:43.501 ================================ 00:27:43.501 Supported: No 00:27:43.501 00:27:43.501 Admin Command Set Attributes 00:27:43.501 ============================ 00:27:43.501 Security Send/Receive: Not Supported 00:27:43.501 Format NVM: Not Supported 00:27:43.501 Firmware Activate/Download: Not Supported 00:27:43.501 Namespace Management: Not Supported 00:27:43.501 Device Self-Test: Not Supported 00:27:43.501 Directives: Not Supported 00:27:43.501 NVMe-MI: Not Supported 00:27:43.501 Virtualization Management: Not Supported 00:27:43.501 Doorbell Buffer Config: Not Supported 00:27:43.501 Get LBA Status Capability: Not Supported 00:27:43.501 Command & Feature Lockdown Capability: Not Supported 00:27:43.501 Abort Command Limit: 1 00:27:43.501 Async Event Request Limit: 1 00:27:43.501 Number of Firmware Slots: N/A 00:27:43.501 Firmware Slot 1 Read-Only: N/A 00:27:43.501 Firmware Activation Without Reset: N/A 00:27:43.501 Multiple Update Detection Support: N/A 00:27:43.501 Firmware Update Granularity: No Information Provided 00:27:43.501 Per-Namespace SMART Log: No 00:27:43.501 Asymmetric Namespace Access Log Page: Not Supported 00:27:43.501 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:43.501 Command Effects Log Page: Not Supported 00:27:43.501 Get Log Page Extended Data: Supported 00:27:43.501 Telemetry Log Pages: Not Supported 00:27:43.501 Persistent Event Log Pages: Not Supported 00:27:43.501 Supported Log Pages Log Page: May Support 00:27:43.501 Commands Supported & Effects Log Page: Not Supported 00:27:43.501 Feature Identifiers & Effects Log Page:May Support 00:27:43.501 NVMe-MI Commands & Effects Log Page: May Support 00:27:43.501 Data Area 4 for Telemetry Log: Not Supported 00:27:43.501 Error Log Page Entries Supported: 1 00:27:43.502 Keep Alive: Not Supported 00:27:43.502 00:27:43.502 NVM Command Set Attributes 00:27:43.502 ========================== 00:27:43.502 Submission Queue Entry Size 00:27:43.502 Max: 1 00:27:43.502 Min: 1 00:27:43.502 Completion Queue Entry Size 00:27:43.502 Max: 1 00:27:43.502 Min: 1 00:27:43.502 Number of Namespaces: 0 00:27:43.502 Compare Command: Not Supported 00:27:43.502 Write Uncorrectable Command: Not Supported 00:27:43.502 Dataset Management Command: Not Supported 00:27:43.502 Write Zeroes Command: Not Supported 00:27:43.502 Set Features Save Field: Not Supported 00:27:43.502 Reservations: Not Supported 00:27:43.502 Timestamp: Not Supported 00:27:43.502 Copy: Not 
Supported 00:27:43.502 Volatile Write Cache: Not Present 00:27:43.502 Atomic Write Unit (Normal): 1 00:27:43.502 Atomic Write Unit (PFail): 1 00:27:43.502 Atomic Compare & Write Unit: 1 00:27:43.502 Fused Compare & Write: Not Supported 00:27:43.502 Scatter-Gather List 00:27:43.502 SGL Command Set: Supported 00:27:43.502 SGL Keyed: Not Supported 00:27:43.502 SGL Bit Bucket Descriptor: Not Supported 00:27:43.502 SGL Metadata Pointer: Not Supported 00:27:43.502 Oversized SGL: Not Supported 00:27:43.502 SGL Metadata Address: Not Supported 00:27:43.502 SGL Offset: Supported 00:27:43.502 Transport SGL Data Block: Not Supported 00:27:43.502 Replay Protected Memory Block: Not Supported 00:27:43.502 00:27:43.502 Firmware Slot Information 00:27:43.502 ========================= 00:27:43.502 Active slot: 0 00:27:43.502 00:27:43.502 00:27:43.502 Error Log 00:27:43.502 ========= 00:27:43.502 00:27:43.502 Active Namespaces 00:27:43.502 ================= 00:27:43.502 Discovery Log Page 00:27:43.502 ================== 00:27:43.502 Generation Counter: 2 00:27:43.502 Number of Records: 2 00:27:43.502 Record Format: 0 00:27:43.502 00:27:43.502 Discovery Log Entry 0 00:27:43.502 ---------------------- 00:27:43.502 Transport Type: 3 (TCP) 00:27:43.502 Address Family: 1 (IPv4) 00:27:43.502 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:43.502 Entry Flags: 00:27:43.502 Duplicate Returned Information: 0 00:27:43.502 Explicit Persistent Connection Support for Discovery: 0 00:27:43.502 Transport Requirements: 00:27:43.502 Secure Channel: Not Specified 00:27:43.502 Port ID: 1 (0x0001) 00:27:43.502 Controller ID: 65535 (0xffff) 00:27:43.502 Admin Max SQ Size: 32 00:27:43.502 Transport Service Identifier: 4420 00:27:43.502 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:43.502 Transport Address: 10.0.0.1 00:27:43.502 Discovery Log Entry 1 00:27:43.502 ---------------------- 00:27:43.502 Transport Type: 3 (TCP) 00:27:43.502 Address Family: 1 (IPv4) 00:27:43.502 Subsystem Type: 2 (NVM Subsystem) 00:27:43.502 Entry Flags: 00:27:43.502 Duplicate Returned Information: 0 00:27:43.502 Explicit Persistent Connection Support for Discovery: 0 00:27:43.502 Transport Requirements: 00:27:43.502 Secure Channel: Not Specified 00:27:43.502 Port ID: 1 (0x0001) 00:27:43.502 Controller ID: 65535 (0xffff) 00:27:43.502 Admin Max SQ Size: 32 00:27:43.502 Transport Service Identifier: 4420 00:27:43.502 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:43.502 Transport Address: 10.0.0.1 00:27:43.502 15:08:29 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:43.502 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.762 get_feature(0x01) failed 00:27:43.762 get_feature(0x02) failed 00:27:43.762 get_feature(0x04) failed 00:27:43.762 ===================================================== 00:27:43.762 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:43.762 ===================================================== 00:27:43.762 Controller Capabilities/Features 00:27:43.762 ================================ 00:27:43.762 Vendor ID: 0000 00:27:43.762 Subsystem Vendor ID: 0000 00:27:43.762 Serial Number: 83d6374a0bff5b53cb08 00:27:43.762 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:43.762 Firmware Version: 6.7.0-68 00:27:43.762 Recommended Arb Burst: 6 00:27:43.762 IEEE OUI Identifier: 00 00 00 
00:27:43.762 Multi-path I/O 00:27:43.762 May have multiple subsystem ports: Yes 00:27:43.762 May have multiple controllers: Yes 00:27:43.762 Associated with SR-IOV VF: No 00:27:43.762 Max Data Transfer Size: Unlimited 00:27:43.762 Max Number of Namespaces: 1024 00:27:43.762 Max Number of I/O Queues: 128 00:27:43.762 NVMe Specification Version (VS): 1.3 00:27:43.762 NVMe Specification Version (Identify): 1.3 00:27:43.762 Maximum Queue Entries: 1024 00:27:43.762 Contiguous Queues Required: No 00:27:43.762 Arbitration Mechanisms Supported 00:27:43.762 Weighted Round Robin: Not Supported 00:27:43.762 Vendor Specific: Not Supported 00:27:43.762 Reset Timeout: 7500 ms 00:27:43.762 Doorbell Stride: 4 bytes 00:27:43.762 NVM Subsystem Reset: Not Supported 00:27:43.762 Command Sets Supported 00:27:43.762 NVM Command Set: Supported 00:27:43.762 Boot Partition: Not Supported 00:27:43.762 Memory Page Size Minimum: 4096 bytes 00:27:43.762 Memory Page Size Maximum: 4096 bytes 00:27:43.762 Persistent Memory Region: Not Supported 00:27:43.762 Optional Asynchronous Events Supported 00:27:43.762 Namespace Attribute Notices: Supported 00:27:43.762 Firmware Activation Notices: Not Supported 00:27:43.762 ANA Change Notices: Supported 00:27:43.762 PLE Aggregate Log Change Notices: Not Supported 00:27:43.762 LBA Status Info Alert Notices: Not Supported 00:27:43.762 EGE Aggregate Log Change Notices: Not Supported 00:27:43.762 Normal NVM Subsystem Shutdown event: Not Supported 00:27:43.762 Zone Descriptor Change Notices: Not Supported 00:27:43.762 Discovery Log Change Notices: Not Supported 00:27:43.762 Controller Attributes 00:27:43.762 128-bit Host Identifier: Supported 00:27:43.762 Non-Operational Permissive Mode: Not Supported 00:27:43.762 NVM Sets: Not Supported 00:27:43.762 Read Recovery Levels: Not Supported 00:27:43.762 Endurance Groups: Not Supported 00:27:43.762 Predictable Latency Mode: Not Supported 00:27:43.762 Traffic Based Keep ALive: Supported 00:27:43.762 Namespace Granularity: Not Supported 00:27:43.762 SQ Associations: Not Supported 00:27:43.762 UUID List: Not Supported 00:27:43.762 Multi-Domain Subsystem: Not Supported 00:27:43.762 Fixed Capacity Management: Not Supported 00:27:43.762 Variable Capacity Management: Not Supported 00:27:43.762 Delete Endurance Group: Not Supported 00:27:43.762 Delete NVM Set: Not Supported 00:27:43.762 Extended LBA Formats Supported: Not Supported 00:27:43.762 Flexible Data Placement Supported: Not Supported 00:27:43.762 00:27:43.762 Controller Memory Buffer Support 00:27:43.762 ================================ 00:27:43.762 Supported: No 00:27:43.762 00:27:43.762 Persistent Memory Region Support 00:27:43.762 ================================ 00:27:43.762 Supported: No 00:27:43.762 00:27:43.762 Admin Command Set Attributes 00:27:43.762 ============================ 00:27:43.762 Security Send/Receive: Not Supported 00:27:43.762 Format NVM: Not Supported 00:27:43.762 Firmware Activate/Download: Not Supported 00:27:43.762 Namespace Management: Not Supported 00:27:43.762 Device Self-Test: Not Supported 00:27:43.762 Directives: Not Supported 00:27:43.762 NVMe-MI: Not Supported 00:27:43.762 Virtualization Management: Not Supported 00:27:43.762 Doorbell Buffer Config: Not Supported 00:27:43.762 Get LBA Status Capability: Not Supported 00:27:43.762 Command & Feature Lockdown Capability: Not Supported 00:27:43.762 Abort Command Limit: 4 00:27:43.762 Async Event Request Limit: 4 00:27:43.762 Number of Firmware Slots: N/A 00:27:43.762 Firmware Slot 1 Read-Only: N/A 00:27:43.762 
Firmware Activation Without Reset: N/A 00:27:43.762 Multiple Update Detection Support: N/A 00:27:43.762 Firmware Update Granularity: No Information Provided 00:27:43.762 Per-Namespace SMART Log: Yes 00:27:43.762 Asymmetric Namespace Access Log Page: Supported 00:27:43.762 ANA Transition Time : 10 sec 00:27:43.762 00:27:43.762 Asymmetric Namespace Access Capabilities 00:27:43.762 ANA Optimized State : Supported 00:27:43.762 ANA Non-Optimized State : Supported 00:27:43.762 ANA Inaccessible State : Supported 00:27:43.762 ANA Persistent Loss State : Supported 00:27:43.762 ANA Change State : Supported 00:27:43.762 ANAGRPID is not changed : No 00:27:43.762 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:43.762 00:27:43.762 ANA Group Identifier Maximum : 128 00:27:43.762 Number of ANA Group Identifiers : 128 00:27:43.762 Max Number of Allowed Namespaces : 1024 00:27:43.762 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:43.762 Command Effects Log Page: Supported 00:27:43.762 Get Log Page Extended Data: Supported 00:27:43.762 Telemetry Log Pages: Not Supported 00:27:43.762 Persistent Event Log Pages: Not Supported 00:27:43.762 Supported Log Pages Log Page: May Support 00:27:43.762 Commands Supported & Effects Log Page: Not Supported 00:27:43.762 Feature Identifiers & Effects Log Page:May Support 00:27:43.762 NVMe-MI Commands & Effects Log Page: May Support 00:27:43.762 Data Area 4 for Telemetry Log: Not Supported 00:27:43.762 Error Log Page Entries Supported: 128 00:27:43.762 Keep Alive: Supported 00:27:43.762 Keep Alive Granularity: 1000 ms 00:27:43.762 00:27:43.762 NVM Command Set Attributes 00:27:43.762 ========================== 00:27:43.762 Submission Queue Entry Size 00:27:43.762 Max: 64 00:27:43.762 Min: 64 00:27:43.762 Completion Queue Entry Size 00:27:43.762 Max: 16 00:27:43.762 Min: 16 00:27:43.762 Number of Namespaces: 1024 00:27:43.762 Compare Command: Not Supported 00:27:43.763 Write Uncorrectable Command: Not Supported 00:27:43.763 Dataset Management Command: Supported 00:27:43.763 Write Zeroes Command: Supported 00:27:43.763 Set Features Save Field: Not Supported 00:27:43.763 Reservations: Not Supported 00:27:43.763 Timestamp: Not Supported 00:27:43.763 Copy: Not Supported 00:27:43.763 Volatile Write Cache: Present 00:27:43.763 Atomic Write Unit (Normal): 1 00:27:43.763 Atomic Write Unit (PFail): 1 00:27:43.763 Atomic Compare & Write Unit: 1 00:27:43.763 Fused Compare & Write: Not Supported 00:27:43.763 Scatter-Gather List 00:27:43.763 SGL Command Set: Supported 00:27:43.763 SGL Keyed: Not Supported 00:27:43.763 SGL Bit Bucket Descriptor: Not Supported 00:27:43.763 SGL Metadata Pointer: Not Supported 00:27:43.763 Oversized SGL: Not Supported 00:27:43.763 SGL Metadata Address: Not Supported 00:27:43.763 SGL Offset: Supported 00:27:43.763 Transport SGL Data Block: Not Supported 00:27:43.763 Replay Protected Memory Block: Not Supported 00:27:43.763 00:27:43.763 Firmware Slot Information 00:27:43.763 ========================= 00:27:43.763 Active slot: 0 00:27:43.763 00:27:43.763 Asymmetric Namespace Access 00:27:43.763 =========================== 00:27:43.763 Change Count : 0 00:27:43.763 Number of ANA Group Descriptors : 1 00:27:43.763 ANA Group Descriptor : 0 00:27:43.763 ANA Group ID : 1 00:27:43.763 Number of NSID Values : 1 00:27:43.763 Change Count : 0 00:27:43.763 ANA State : 1 00:27:43.763 Namespace Identifier : 1 00:27:43.763 00:27:43.763 Commands Supported and Effects 00:27:43.763 ============================== 00:27:43.763 Admin Commands 00:27:43.763 -------------- 
00:27:43.763 Get Log Page (02h): Supported 00:27:43.763 Identify (06h): Supported 00:27:43.763 Abort (08h): Supported 00:27:43.763 Set Features (09h): Supported 00:27:43.763 Get Features (0Ah): Supported 00:27:43.763 Asynchronous Event Request (0Ch): Supported 00:27:43.763 Keep Alive (18h): Supported 00:27:43.763 I/O Commands 00:27:43.763 ------------ 00:27:43.763 Flush (00h): Supported 00:27:43.763 Write (01h): Supported LBA-Change 00:27:43.763 Read (02h): Supported 00:27:43.763 Write Zeroes (08h): Supported LBA-Change 00:27:43.763 Dataset Management (09h): Supported 00:27:43.763 00:27:43.763 Error Log 00:27:43.763 ========= 00:27:43.763 Entry: 0 00:27:43.763 Error Count: 0x3 00:27:43.763 Submission Queue Id: 0x0 00:27:43.763 Command Id: 0x5 00:27:43.763 Phase Bit: 0 00:27:43.763 Status Code: 0x2 00:27:43.763 Status Code Type: 0x0 00:27:43.763 Do Not Retry: 1 00:27:43.763 Error Location: 0x28 00:27:43.763 LBA: 0x0 00:27:43.763 Namespace: 0x0 00:27:43.763 Vendor Log Page: 0x0 00:27:43.763 ----------- 00:27:43.763 Entry: 1 00:27:43.763 Error Count: 0x2 00:27:43.763 Submission Queue Id: 0x0 00:27:43.763 Command Id: 0x5 00:27:43.763 Phase Bit: 0 00:27:43.763 Status Code: 0x2 00:27:43.763 Status Code Type: 0x0 00:27:43.763 Do Not Retry: 1 00:27:43.763 Error Location: 0x28 00:27:43.763 LBA: 0x0 00:27:43.763 Namespace: 0x0 00:27:43.763 Vendor Log Page: 0x0 00:27:43.763 ----------- 00:27:43.763 Entry: 2 00:27:43.763 Error Count: 0x1 00:27:43.763 Submission Queue Id: 0x0 00:27:43.763 Command Id: 0x4 00:27:43.763 Phase Bit: 0 00:27:43.763 Status Code: 0x2 00:27:43.763 Status Code Type: 0x0 00:27:43.763 Do Not Retry: 1 00:27:43.763 Error Location: 0x28 00:27:43.763 LBA: 0x0 00:27:43.763 Namespace: 0x0 00:27:43.763 Vendor Log Page: 0x0 00:27:43.763 00:27:43.763 Number of Queues 00:27:43.763 ================ 00:27:43.763 Number of I/O Submission Queues: 128 00:27:43.763 Number of I/O Completion Queues: 128 00:27:43.763 00:27:43.763 ZNS Specific Controller Data 00:27:43.763 ============================ 00:27:43.763 Zone Append Size Limit: 0 00:27:43.763 00:27:43.763 00:27:43.763 Active Namespaces 00:27:43.763 ================= 00:27:43.763 get_feature(0x05) failed 00:27:43.763 Namespace ID:1 00:27:43.763 Command Set Identifier: NVM (00h) 00:27:43.763 Deallocate: Supported 00:27:43.763 Deallocated/Unwritten Error: Not Supported 00:27:43.763 Deallocated Read Value: Unknown 00:27:43.763 Deallocate in Write Zeroes: Not Supported 00:27:43.763 Deallocated Guard Field: 0xFFFF 00:27:43.763 Flush: Supported 00:27:43.763 Reservation: Not Supported 00:27:43.763 Namespace Sharing Capabilities: Multiple Controllers 00:27:43.763 Size (in LBAs): 1953525168 (931GiB) 00:27:43.763 Capacity (in LBAs): 1953525168 (931GiB) 00:27:43.763 Utilization (in LBAs): 1953525168 (931GiB) 00:27:43.763 UUID: d33895cf-231b-4a53-8d30-139e3236d55b 00:27:43.763 Thin Provisioning: Not Supported 00:27:43.763 Per-NS Atomic Units: Yes 00:27:43.763 Atomic Boundary Size (Normal): 0 00:27:43.763 Atomic Boundary Size (PFail): 0 00:27:43.763 Atomic Boundary Offset: 0 00:27:43.763 NGUID/EUI64 Never Reused: No 00:27:43.763 ANA group ID: 1 00:27:43.763 Namespace Write Protected: No 00:27:43.763 Number of LBA Formats: 1 00:27:43.763 Current LBA Format: LBA Format #00 00:27:43.763 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:43.763 00:27:43.763 15:08:29 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:43.763 15:08:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:43.763 15:08:29 -- nvmf/common.sh@117 -- # sync 00:27:43.763 15:08:29 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:43.763 15:08:29 -- nvmf/common.sh@120 -- # set +e 00:27:43.763 15:08:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:43.763 15:08:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:43.763 rmmod nvme_tcp 00:27:43.763 rmmod nvme_fabrics 00:27:43.763 15:08:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:43.763 15:08:29 -- nvmf/common.sh@124 -- # set -e 00:27:43.763 15:08:29 -- nvmf/common.sh@125 -- # return 0 00:27:43.763 15:08:29 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:27:43.763 15:08:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:43.763 15:08:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:43.763 15:08:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:43.763 15:08:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.763 15:08:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:43.763 15:08:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.763 15:08:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.763 15:08:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.665 15:08:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:45.665 15:08:31 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:45.665 15:08:31 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:45.665 15:08:31 -- nvmf/common.sh@675 -- # echo 0 00:27:45.665 15:08:31 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:45.665 15:08:31 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:45.665 15:08:31 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:45.665 15:08:31 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:45.665 15:08:31 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:27:45.665 15:08:31 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:27:45.665 15:08:31 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:47.039 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:47.039 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:47.039 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:47.039 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:47.039 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:47.039 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:47.039 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:47.039 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:47.039 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:47.039 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:47.039 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:47.039 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:47.039 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:47.039 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:47.039 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:47.039 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:48.032 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:48.032 00:27:48.032 real 0m9.241s 00:27:48.032 user 0m1.999s 00:27:48.032 sys 0m3.353s 00:27:48.032 15:08:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:48.032 15:08:33 -- common/autotest_common.sh@10 -- # set +x 00:27:48.032 ************************************ 00:27:48.032 END 
TEST nvmf_identify_kernel_target 00:27:48.033 ************************************ 00:27:48.033 15:08:33 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:48.033 15:08:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:48.033 15:08:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:48.033 15:08:33 -- common/autotest_common.sh@10 -- # set +x 00:27:48.290 ************************************ 00:27:48.290 START TEST nvmf_auth 00:27:48.290 ************************************ 00:27:48.290 15:08:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:48.290 * Looking for test storage... 00:27:48.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:48.290 15:08:33 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.290 15:08:33 -- nvmf/common.sh@7 -- # uname -s 00:27:48.291 15:08:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.291 15:08:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.291 15:08:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.291 15:08:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.291 15:08:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.291 15:08:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.291 15:08:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.291 15:08:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.291 15:08:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.291 15:08:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.291 15:08:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:48.291 15:08:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:48.291 15:08:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.291 15:08:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.291 15:08:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.291 15:08:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.291 15:08:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.291 15:08:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.291 15:08:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.291 15:08:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.291 15:08:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.291 15:08:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.291 15:08:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.291 15:08:33 -- paths/export.sh@5 -- # export PATH 00:27:48.291 15:08:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.291 15:08:33 -- nvmf/common.sh@47 -- # : 0 00:27:48.291 15:08:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:48.291 15:08:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:48.291 15:08:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.291 15:08:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.291 15:08:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.291 15:08:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:48.291 15:08:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:48.291 15:08:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:48.291 15:08:33 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:48.291 15:08:33 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:48.291 15:08:33 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:48.291 15:08:33 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:48.291 15:08:33 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:48.291 15:08:33 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:48.291 15:08:33 -- host/auth.sh@21 -- # keys=() 00:27:48.291 15:08:33 -- host/auth.sh@77 -- # nvmftestinit 00:27:48.291 15:08:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:48.291 15:08:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.291 15:08:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:48.291 15:08:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:48.291 15:08:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:48.291 15:08:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.291 15:08:33 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.291 15:08:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.291 15:08:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:48.291 15:08:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:48.291 15:08:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:48.291 15:08:33 -- common/autotest_common.sh@10 -- # set +x 00:27:50.192 15:08:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:50.192 15:08:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:50.192 15:08:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:50.192 15:08:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:50.192 15:08:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:50.192 15:08:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:50.192 15:08:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:50.192 15:08:35 -- nvmf/common.sh@295 -- # net_devs=() 00:27:50.192 15:08:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:50.192 15:08:35 -- nvmf/common.sh@296 -- # e810=() 00:27:50.192 15:08:35 -- nvmf/common.sh@296 -- # local -ga e810 00:27:50.192 15:08:35 -- nvmf/common.sh@297 -- # x722=() 00:27:50.192 15:08:35 -- nvmf/common.sh@297 -- # local -ga x722 00:27:50.192 15:08:35 -- nvmf/common.sh@298 -- # mlx=() 00:27:50.192 15:08:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:50.192 15:08:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.192 15:08:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:50.192 15:08:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:50.192 15:08:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:50.192 15:08:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.192 15:08:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:50.192 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:50.192 15:08:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.192 15:08:35 -- nvmf/common.sh@341 -- # echo 'Found 
0000:84:00.1 (0x8086 - 0x159b)' 00:27:50.192 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:50.192 15:08:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:50.192 15:08:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.192 15:08:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.192 15:08:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:50.192 15:08:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.192 15:08:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:50.192 Found net devices under 0000:84:00.0: cvl_0_0 00:27:50.192 15:08:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.192 15:08:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.192 15:08:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.192 15:08:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:50.192 15:08:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.192 15:08:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:50.192 Found net devices under 0000:84:00.1: cvl_0_1 00:27:50.192 15:08:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.192 15:08:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:50.192 15:08:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:50.192 15:08:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:50.192 15:08:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:50.192 15:08:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.192 15:08:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.192 15:08:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.192 15:08:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:50.192 15:08:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.192 15:08:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.192 15:08:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:50.192 15:08:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.192 15:08:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.192 15:08:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:50.192 15:08:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:50.192 15:08:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.192 15:08:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.192 15:08:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.192 15:08:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.192 15:08:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:50.192 15:08:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.450 15:08:35 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.450 15:08:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.450 15:08:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:27:50.450 00:27:50.450 --- 10.0.0.2 ping statistics --- 00:27:50.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.450 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:50.450 15:08:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:27:50.450 00:27:50.450 --- 10.0.0.1 ping statistics --- 00:27:50.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.450 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:27:50.450 15:08:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.450 15:08:35 -- nvmf/common.sh@411 -- # return 0 00:27:50.450 15:08:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:50.450 15:08:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.450 15:08:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:50.450 15:08:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:50.450 15:08:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.450 15:08:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:50.450 15:08:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:50.450 15:08:35 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:27:50.450 15:08:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:50.450 15:08:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:50.450 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:27:50.450 15:08:36 -- nvmf/common.sh@470 -- # nvmfpid=3887218 00:27:50.450 15:08:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:50.450 15:08:36 -- nvmf/common.sh@471 -- # waitforlisten 3887218 00:27:50.450 15:08:36 -- common/autotest_common.sh@817 -- # '[' -z 3887218 ']' 00:27:50.450 15:08:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.450 15:08:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:50.450 15:08:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
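[editor's note] The namespace wiring that nvmf_tcp_init traced above boils down to a short command sequence. This sketch only collects the ip/iptables/ping calls already visible in the trace (cvl_0_0 and cvl_0_1 are the two e810 net devices found earlier; 10.0.0.1 and 10.0.0.2 are the fixed test addresses):

    # one NIC port moves into a private namespace, the other stays in the root ns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # root-ns side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # namespace side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> root ns

Both pings answering in well under a millisecond, as above, is what lets the init path return 0 and the test proceed.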
00:27:50.450 15:08:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:50.450 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:27:50.709 15:08:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:50.709 15:08:36 -- common/autotest_common.sh@850 -- # return 0 00:27:50.709 15:08:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:50.709 15:08:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:50.709 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:27:50.709 15:08:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.709 15:08:36 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:50.709 15:08:36 -- host/auth.sh@81 -- # gen_key null 32 00:27:50.709 15:08:36 -- host/auth.sh@53 -- # local digest len file key 00:27:50.709 15:08:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.709 15:08:36 -- host/auth.sh@54 -- # local -A digests 00:27:50.709 15:08:36 -- host/auth.sh@56 -- # digest=null 00:27:50.709 15:08:36 -- host/auth.sh@56 -- # len=32 00:27:50.709 15:08:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:50.709 15:08:36 -- host/auth.sh@57 -- # key=414e38efc6c1b4f269532b49b1dad2e4 00:27:50.709 15:08:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:50.709 15:08:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.uuV 00:27:50.709 15:08:36 -- host/auth.sh@59 -- # format_dhchap_key 414e38efc6c1b4f269532b49b1dad2e4 0 00:27:50.709 15:08:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 414e38efc6c1b4f269532b49b1dad2e4 0 00:27:50.709 15:08:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:50.709 15:08:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:50.709 15:08:36 -- nvmf/common.sh@693 -- # key=414e38efc6c1b4f269532b49b1dad2e4 00:27:50.709 15:08:36 -- nvmf/common.sh@693 -- # digest=0 00:27:50.709 15:08:36 -- nvmf/common.sh@694 -- # python - 00:27:50.709 15:08:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.uuV 00:27:50.709 15:08:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.uuV 00:27:50.709 15:08:36 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.uuV 00:27:50.709 15:08:36 -- host/auth.sh@82 -- # gen_key null 48 00:27:50.709 15:08:36 -- host/auth.sh@53 -- # local digest len file key 00:27:50.709 15:08:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.709 15:08:36 -- host/auth.sh@54 -- # local -A digests 00:27:50.709 15:08:36 -- host/auth.sh@56 -- # digest=null 00:27:50.709 15:08:36 -- host/auth.sh@56 -- # len=48 00:27:50.709 15:08:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:50.709 15:08:36 -- host/auth.sh@57 -- # key=9da32aee09ef55c87a1206c1a89269c40bb50e0ef25a50c6 00:27:50.709 15:08:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:50.709 15:08:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.zuB 00:27:50.709 15:08:36 -- host/auth.sh@59 -- # format_dhchap_key 9da32aee09ef55c87a1206c1a89269c40bb50e0ef25a50c6 0 00:27:50.709 15:08:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 9da32aee09ef55c87a1206c1a89269c40bb50e0ef25a50c6 0 00:27:50.709 15:08:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:50.709 15:08:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:50.709 15:08:36 -- nvmf/common.sh@693 -- # key=9da32aee09ef55c87a1206c1a89269c40bb50e0ef25a50c6 00:27:50.709 15:08:36 -- nvmf/common.sh@693 -- # 
digest=0 00:27:50.709 15:08:36 -- nvmf/common.sh@694 -- # python - 00:27:50.709 15:08:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.zuB 00:27:50.709 15:08:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.zuB 00:27:50.709 15:08:36 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.zuB 00:27:50.709 15:08:36 -- host/auth.sh@83 -- # gen_key sha256 32 00:27:50.709 15:08:36 -- host/auth.sh@53 -- # local digest len file key 00:27:50.709 15:08:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.709 15:08:36 -- host/auth.sh@54 -- # local -A digests 00:27:50.709 15:08:36 -- host/auth.sh@56 -- # digest=sha256 00:27:50.709 15:08:36 -- host/auth.sh@56 -- # len=32 00:27:50.709 15:08:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:50.709 15:08:36 -- host/auth.sh@57 -- # key=1126a78d20fdbf01559ddb8d8d5bb7d1 00:27:50.709 15:08:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:27:50.709 15:08:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.XZV 00:27:50.709 15:08:36 -- host/auth.sh@59 -- # format_dhchap_key 1126a78d20fdbf01559ddb8d8d5bb7d1 1 00:27:50.709 15:08:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 1126a78d20fdbf01559ddb8d8d5bb7d1 1 00:27:50.709 15:08:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:50.709 15:08:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:50.709 15:08:36 -- nvmf/common.sh@693 -- # key=1126a78d20fdbf01559ddb8d8d5bb7d1 00:27:50.709 15:08:36 -- nvmf/common.sh@693 -- # digest=1 00:27:50.709 15:08:36 -- nvmf/common.sh@694 -- # python - 00:27:50.968 15:08:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.XZV 00:27:50.968 15:08:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.XZV 00:27:50.968 15:08:36 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.XZV 00:27:50.968 15:08:36 -- host/auth.sh@84 -- # gen_key sha384 48 00:27:50.968 15:08:36 -- host/auth.sh@53 -- # local digest len file key 00:27:50.968 15:08:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.968 15:08:36 -- host/auth.sh@54 -- # local -A digests 00:27:50.968 15:08:36 -- host/auth.sh@56 -- # digest=sha384 00:27:50.968 15:08:36 -- host/auth.sh@56 -- # len=48 00:27:50.968 15:08:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:50.968 15:08:36 -- host/auth.sh@57 -- # key=c7f11de9a74e34c4d216188ec59a19882030e7b1b6e01182 00:27:50.968 15:08:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:27:50.968 15:08:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.CIv 00:27:50.968 15:08:36 -- host/auth.sh@59 -- # format_dhchap_key c7f11de9a74e34c4d216188ec59a19882030e7b1b6e01182 2 00:27:50.968 15:08:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 c7f11de9a74e34c4d216188ec59a19882030e7b1b6e01182 2 00:27:50.968 15:08:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:50.968 15:08:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:50.968 15:08:36 -- nvmf/common.sh@693 -- # key=c7f11de9a74e34c4d216188ec59a19882030e7b1b6e01182 00:27:50.968 15:08:36 -- nvmf/common.sh@693 -- # digest=2 00:27:50.968 15:08:36 -- nvmf/common.sh@694 -- # python - 00:27:50.968 15:08:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.CIv 00:27:50.968 15:08:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.CIv 00:27:50.968 15:08:36 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.CIv 00:27:50.968 15:08:36 -- host/auth.sh@85 -- # gen_key sha512 64 00:27:50.968 15:08:36 -- host/auth.sh@53 -- # local digest len file key 00:27:50.968 15:08:36 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.968 15:08:36 -- host/auth.sh@54 -- # local -A digests 00:27:50.968 15:08:36 -- host/auth.sh@56 -- # digest=sha512 00:27:50.968 15:08:36 -- host/auth.sh@56 -- # len=64 00:27:50.968 15:08:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:50.968 15:08:36 -- host/auth.sh@57 -- # key=f705a2c0fed21444732c2a562206d3fd483858eba65132ffba0653ff93a224e1 00:27:50.968 15:08:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:27:50.969 15:08:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.aL5 00:27:50.969 15:08:36 -- host/auth.sh@59 -- # format_dhchap_key f705a2c0fed21444732c2a562206d3fd483858eba65132ffba0653ff93a224e1 3 00:27:50.969 15:08:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 f705a2c0fed21444732c2a562206d3fd483858eba65132ffba0653ff93a224e1 3 00:27:50.969 15:08:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:50.969 15:08:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:50.969 15:08:36 -- nvmf/common.sh@693 -- # key=f705a2c0fed21444732c2a562206d3fd483858eba65132ffba0653ff93a224e1 00:27:50.969 15:08:36 -- nvmf/common.sh@693 -- # digest=3 00:27:50.969 15:08:36 -- nvmf/common.sh@694 -- # python - 00:27:50.969 15:08:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.aL5 00:27:50.969 15:08:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.aL5 00:27:50.969 15:08:36 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.aL5 00:27:50.969 15:08:36 -- host/auth.sh@87 -- # waitforlisten 3887218 00:27:50.969 15:08:36 -- common/autotest_common.sh@817 -- # '[' -z 3887218 ']' 00:27:50.969 15:08:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.969 15:08:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:50.969 15:08:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
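[editor's note] With all five secrets generated, the gen_key/format_dhchap_key step traced above is worth unpacking: xxd pulls len/2 random bytes as a hex string, and an inline python one-liner wraps that string into a DHHC-1 key. The sketch below reproduces the formatting; the layout (two-digit hash id, then base64 of the secret string with a little-endian CRC-32 appended) is inferred from decoding the DHHC-1:XX:...: values printed in this log, so treat it as an assumption about the exact code, not a copy of it. The helper name is hypothetical:

    # format_dhchap_key_sketch <digest-id> <hex-secret>   (hypothetical helper)
    format_dhchap_key_sketch() {
      local digest=$1 key=$2
      python3 - "$digest" "$key" <<'PYEOF'
    import sys, base64, zlib
    digest, key = sys.argv[1], sys.argv[2].encode()
    crc = zlib.crc32(key).to_bytes(4, 'little')  # CRC-32 over the secret string, little-endian (assumed)
    print(f'DHHC-1:{int(digest):02}:{base64.b64encode(key + crc).decode()}:')
    PYEOF
    }

    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex chars, exactly as gen_key null 32 does
    format_dhchap_key_sketch 0 "$key"       # -> DHHC-1:00:...: as seen in the traces above

Decoding the base64 of, say, keys[1] above yields the original 48-character hex string plus four trailing bytes, which is what the CRC-32 suffix accounts for.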
00:27:50.969 15:08:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:50.969 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:27:51.227 15:08:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:51.227 15:08:36 -- common/autotest_common.sh@850 -- # return 0 00:27:51.227 15:08:36 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:51.227 15:08:36 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uuV 00:27:51.227 15:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.227 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:27:51.227 15:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.227 15:08:36 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:51.227 15:08:36 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zuB 00:27:51.227 15:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.227 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:27:51.227 15:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.227 15:08:36 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:51.227 15:08:36 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.XZV 00:27:51.227 15:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.227 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:27:51.227 15:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.227 15:08:36 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:51.227 15:08:36 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.CIv 00:27:51.227 15:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.227 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:27:51.227 15:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.227 15:08:36 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:51.227 15:08:36 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.aL5 00:27:51.227 15:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.227 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:27:51.227 15:08:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.227 15:08:36 -- host/auth.sh@92 -- # nvmet_auth_init 00:27:51.227 15:08:36 -- host/auth.sh@35 -- # get_main_ns_ip 00:27:51.227 15:08:36 -- nvmf/common.sh@717 -- # local ip 00:27:51.227 15:08:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:51.227 15:08:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:51.227 15:08:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.227 15:08:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.227 15:08:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:51.227 15:08:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.227 15:08:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:51.227 15:08:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:51.227 15:08:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:51.227 15:08:36 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:51.227 15:08:36 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:51.227 15:08:36 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:51.227 15:08:36 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:51.227 15:08:36 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:51.227 15:08:36 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:51.227 15:08:36 -- nvmf/common.sh@628 -- # local block nvme 00:27:51.227 15:08:36 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:27:51.227 15:08:36 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:51.227 15:08:36 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:51.227 15:08:36 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:52.601 Waiting for block devices as requested 00:27:52.601 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:52.601 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:52.601 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:52.601 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:52.601 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:52.859 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:52.859 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:52.859 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:52.859 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:52.859 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:53.117 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:53.117 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:53.117 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:53.117 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:53.375 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:53.375 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:53.375 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:53.943 15:08:39 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:53.943 15:08:39 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:53.943 15:08:39 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:53.943 15:08:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:53.943 15:08:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:53.943 15:08:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:53.943 15:08:39 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:53.943 15:08:39 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:53.943 15:08:39 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:53.943 No valid GPT data, bailing 00:27:53.943 15:08:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:53.943 15:08:39 -- scripts/common.sh@391 -- # pt= 00:27:53.943 15:08:39 -- scripts/common.sh@392 -- # return 1 00:27:53.943 15:08:39 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:53.943 15:08:39 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:27:53.943 15:08:39 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.943 15:08:39 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:53.943 15:08:39 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:53.943 15:08:39 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:53.943 15:08:39 -- nvmf/common.sh@656 -- # echo 1 00:27:53.943 15:08:39 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:27:53.943 15:08:39 -- nvmf/common.sh@658 -- # echo 1 00:27:53.943 15:08:39 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:53.943 15:08:39 -- nvmf/common.sh@661 -- # echo tcp 00:27:53.943 15:08:39 -- 
nvmf/common.sh@662 -- # echo 4420 00:27:53.943 15:08:39 -- nvmf/common.sh@663 -- # echo ipv4 00:27:53.943 15:08:39 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:53.943 15:08:39 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:53.943 00:27:53.943 Discovery Log Number of Records 2, Generation counter 2 00:27:53.943 =====Discovery Log Entry 0====== 00:27:53.943 trtype: tcp 00:27:53.943 adrfam: ipv4 00:27:53.943 subtype: current discovery subsystem 00:27:53.943 treq: not specified, sq flow control disable supported 00:27:53.943 portid: 1 00:27:53.943 trsvcid: 4420 00:27:53.943 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:53.943 traddr: 10.0.0.1 00:27:53.943 eflags: none 00:27:53.943 sectype: none 00:27:53.943 =====Discovery Log Entry 1====== 00:27:53.943 trtype: tcp 00:27:53.943 adrfam: ipv4 00:27:53.943 subtype: nvme subsystem 00:27:53.943 treq: not specified, sq flow control disable supported 00:27:53.943 portid: 1 00:27:53.943 trsvcid: 4420 00:27:53.943 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:53.943 traddr: 10.0.0.1 00:27:53.943 eflags: none 00:27:53.943 sectype: none 00:27:53.943 15:08:39 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:53.943 15:08:39 -- host/auth.sh@37 -- # echo 0 00:27:53.943 15:08:39 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:53.943 15:08:39 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:53.944 15:08:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.944 15:08:39 -- host/auth.sh@44 -- # digest=sha256 00:27:53.944 15:08:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.944 15:08:39 -- host/auth.sh@44 -- # keyid=1 00:27:53.944 15:08:39 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:53.944 15:08:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.944 15:08:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:53.944 15:08:39 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:53.944 15:08:39 -- host/auth.sh@100 -- # IFS=, 00:27:53.944 15:08:39 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:27:53.944 15:08:39 -- host/auth.sh@100 -- # IFS=, 00:27:53.944 15:08:39 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:53.944 15:08:39 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:53.944 15:08:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.944 15:08:39 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:27:53.944 15:08:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:53.944 15:08:39 -- host/auth.sh@68 -- # keyid=1 00:27:53.944 15:08:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:53.944 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.944 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:53.944 15:08:39 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.944 15:08:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:53.944 15:08:39 -- nvmf/common.sh@717 -- # local ip 00:27:53.944 15:08:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.944 15:08:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.944 15:08:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.944 15:08:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.944 15:08:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.944 15:08:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.944 15:08:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.944 15:08:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.944 15:08:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.944 15:08:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:53.944 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.944 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:53.944 nvme0n1 00:27:53.944 15:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.944 15:08:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.944 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.944 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:53.944 15:08:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:53.944 15:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.944 15:08:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.944 15:08:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.944 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.944 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:54.234 15:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.234 15:08:39 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:27:54.234 15:08:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.234 15:08:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.234 15:08:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:54.234 15:08:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.234 15:08:39 -- host/auth.sh@44 -- # digest=sha256 00:27:54.234 15:08:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.234 15:08:39 -- host/auth.sh@44 -- # keyid=0 00:27:54.234 15:08:39 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:27:54.234 15:08:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.234 15:08:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:54.234 15:08:39 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:27:54.234 15:08:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:27:54.234 15:08:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.234 15:08:39 -- host/auth.sh@68 -- # digest=sha256 00:27:54.234 15:08:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:54.234 15:08:39 -- host/auth.sh@68 -- # keyid=0 00:27:54.234 15:08:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.234 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.234 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:54.235 15:08:39 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.235 15:08:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.235 15:08:39 -- nvmf/common.sh@717 -- # local ip 00:27:54.235 15:08:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.235 15:08:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.235 15:08:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.235 15:08:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.235 15:08:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.235 15:08:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.235 15:08:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.235 15:08:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.235 15:08:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.235 15:08:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:54.235 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.235 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:54.235 nvme0n1 00:27:54.235 15:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.235 15:08:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.235 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.235 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:54.235 15:08:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.235 15:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.235 15:08:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.235 15:08:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.235 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.235 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:54.235 15:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.235 15:08:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.235 15:08:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:54.235 15:08:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.235 15:08:39 -- host/auth.sh@44 -- # digest=sha256 00:27:54.235 15:08:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.235 15:08:39 -- host/auth.sh@44 -- # keyid=1 00:27:54.235 15:08:39 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:54.235 15:08:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.235 15:08:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:54.235 15:08:39 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:54.235 15:08:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:27:54.235 15:08:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.235 15:08:39 -- host/auth.sh@68 -- # digest=sha256 00:27:54.235 15:08:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:54.235 15:08:39 -- host/auth.sh@68 -- # keyid=1 00:27:54.235 15:08:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.235 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.235 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:54.235 15:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.235 15:08:39 -- host/auth.sh@70 -- # get_main_ns_ip 
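[editor's note] Each connect_authenticate round, like the one in progress above, is the same small set of RPCs against the running app. A condensed sketch follows, with the direct rpc.py invocation assumed (the test's rpc_cmd wrapper resolves to it over /var/tmp/spdk.sock) and all flags copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # restrict the host to the one digest/dhgroup combination under test
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # attach to the kernel target with the key registered earlier in the keyring
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0

    # confirm authentication succeeded, then tear the controller down again
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    $rpc bdev_nvme_detach_controller nvme0

The bare nvme0n1 markers between rounds are presumably the target's namespace surfacing as a block device while the controller is attached.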
00:27:54.235 15:08:39 -- nvmf/common.sh@717 -- # local ip 00:27:54.235 15:08:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.235 15:08:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.235 15:08:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.235 15:08:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.235 15:08:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.235 15:08:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.235 15:08:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.235 15:08:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.235 15:08:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.235 15:08:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:54.235 15:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.235 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:27:54.494 nvme0n1 00:27:54.494 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.494 15:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.494 15:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.494 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.494 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.494 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.494 15:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.494 15:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.494 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.494 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.494 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.494 15:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.494 15:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:54.494 15:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.494 15:08:40 -- host/auth.sh@44 -- # digest=sha256 00:27:54.494 15:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.494 15:08:40 -- host/auth.sh@44 -- # keyid=2 00:27:54.494 15:08:40 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:27:54.494 15:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.494 15:08:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:54.494 15:08:40 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:27:54.494 15:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:27:54.494 15:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.494 15:08:40 -- host/auth.sh@68 -- # digest=sha256 00:27:54.494 15:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:54.494 15:08:40 -- host/auth.sh@68 -- # keyid=2 00:27:54.494 15:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.494 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.494 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.494 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.494 15:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.494 15:08:40 -- nvmf/common.sh@717 -- # local ip 00:27:54.494 15:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.494 15:08:40 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:27:54.494 15:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.494 15:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.494 15:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.494 15:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.494 15:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.494 15:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.494 15:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.494 15:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:54.494 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.494 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.494 nvme0n1 00:27:54.494 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.494 15:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.494 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.494 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.494 15:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.494 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.753 15:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.753 15:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.753 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.753 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.753 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.753 15:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.753 15:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:54.753 15:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.753 15:08:40 -- host/auth.sh@44 -- # digest=sha256 00:27:54.753 15:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.753 15:08:40 -- host/auth.sh@44 -- # keyid=3 00:27:54.753 15:08:40 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:27:54.753 15:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.753 15:08:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:54.753 15:08:40 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:27:54.753 15:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:27:54.753 15:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.753 15:08:40 -- host/auth.sh@68 -- # digest=sha256 00:27:54.753 15:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:54.753 15:08:40 -- host/auth.sh@68 -- # keyid=3 00:27:54.753 15:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.753 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.753 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.753 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.753 15:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.753 15:08:40 -- nvmf/common.sh@717 -- # local ip 00:27:54.753 15:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.753 15:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.753 15:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
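[editor's note] On the target side, each nvmet_auth_set_key call above pushes the matching credentials into the kernel's host entry through configfs. The trace only shows the echoed values, not their redirect targets, so the attribute names below are an assumption based on the kernel nvmet-auth configfs layout; the values are taken verbatim from the trace:

    # sketch of nvmet_auth_set_key sha256 ffdhe2048 2 (attribute names assumed)
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha256)' > "$host/dhchap_hash"      # hash used for DH-HMAC-CHAP
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # FFDHE group under test
    echo 'DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2:' > "$host/dhchap_key"

This is the counterpart of the --dhchap-key keyN argument on the host side: both ends must hold the same DHHC-1 secret for the attach to authenticate.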
00:27:54.753 15:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.753 15:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.753 15:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.753 15:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.753 15:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.753 15:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.753 15:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:54.753 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.753 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.753 nvme0n1 00:27:54.753 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.753 15:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.753 15:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.753 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.753 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.753 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.753 15:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.753 15:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.753 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.753 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.753 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.753 15:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.753 15:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:54.753 15:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.753 15:08:40 -- host/auth.sh@44 -- # digest=sha256 00:27:54.753 15:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.753 15:08:40 -- host/auth.sh@44 -- # keyid=4 00:27:54.753 15:08:40 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:27:54.753 15:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.753 15:08:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:54.753 15:08:40 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:27:54.753 15:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:27:54.753 15:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.753 15:08:40 -- host/auth.sh@68 -- # digest=sha256 00:27:54.753 15:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:54.753 15:08:40 -- host/auth.sh@68 -- # keyid=4 00:27:54.753 15:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:54.753 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.753 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:54.753 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.753 15:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.753 15:08:40 -- nvmf/common.sh@717 -- # local ip 00:27:54.753 15:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.753 15:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.753 15:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.753 15:08:40 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.753 15:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.753 15:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.753 15:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.753 15:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.753 15:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.753 15:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.753 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.753 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:55.013 nvme0n1 00:27:55.013 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.013 15:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.013 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.013 15:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.013 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:55.013 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.013 15:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.013 15:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.013 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.013 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:55.013 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.013 15:08:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.013 15:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.013 15:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:55.013 15:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.013 15:08:40 -- host/auth.sh@44 -- # digest=sha256 00:27:55.013 15:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.013 15:08:40 -- host/auth.sh@44 -- # keyid=0 00:27:55.013 15:08:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:27:55.013 15:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.013 15:08:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:55.013 15:08:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:27:55.013 15:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:27:55.013 15:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.013 15:08:40 -- host/auth.sh@68 -- # digest=sha256 00:27:55.013 15:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:55.013 15:08:40 -- host/auth.sh@68 -- # keyid=0 00:27:55.013 15:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.013 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.013 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:55.013 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.013 15:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.013 15:08:40 -- nvmf/common.sh@717 -- # local ip 00:27:55.013 15:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.013 15:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.013 15:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.013 15:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.013 15:08:40 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:27:55.013 15:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.013 15:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.013 15:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.013 15:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.013 15:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:55.013 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.013 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:55.271 nvme0n1 00:27:55.271 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.271 15:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.271 15:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.271 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.271 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:55.271 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.271 15:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.271 15:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.271 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.271 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:55.271 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.271 15:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.271 15:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:55.271 15:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.271 15:08:40 -- host/auth.sh@44 -- # digest=sha256 00:27:55.271 15:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.271 15:08:40 -- host/auth.sh@44 -- # keyid=1 00:27:55.271 15:08:40 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:55.271 15:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.271 15:08:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:55.271 15:08:40 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:55.271 15:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:27:55.271 15:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.271 15:08:40 -- host/auth.sh@68 -- # digest=sha256 00:27:55.271 15:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:55.271 15:08:40 -- host/auth.sh@68 -- # keyid=1 00:27:55.271 15:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.271 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.271 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:55.271 15:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.271 15:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.271 15:08:40 -- nvmf/common.sh@717 -- # local ip 00:27:55.271 15:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.271 15:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.271 15:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.271 15:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.271 15:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.271 15:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.271 15:08:40 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.271 15:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.271 15:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.271 15:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:55.271 15:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.271 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:27:55.271 nvme0n1 00:27:55.271 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.271 15:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.271 15:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.271 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.271 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.529 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.529 15:08:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.529 15:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.529 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.529 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.529 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.529 15:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.529 15:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:55.529 15:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.529 15:08:41 -- host/auth.sh@44 -- # digest=sha256 00:27:55.529 15:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.529 15:08:41 -- host/auth.sh@44 -- # keyid=2 00:27:55.529 15:08:41 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:27:55.529 15:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.529 15:08:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:55.529 15:08:41 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:27:55.529 15:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:27:55.529 15:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.529 15:08:41 -- host/auth.sh@68 -- # digest=sha256 00:27:55.529 15:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:55.529 15:08:41 -- host/auth.sh@68 -- # keyid=2 00:27:55.529 15:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.529 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.529 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.529 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.529 15:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.529 15:08:41 -- nvmf/common.sh@717 -- # local ip 00:27:55.529 15:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.529 15:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.529 15:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.529 15:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.529 15:08:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.529 15:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.529 15:08:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.529 15:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.529 15:08:41 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:27:55.529 15:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:55.529 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.529 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.529 nvme0n1 00:27:55.529 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.529 15:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.529 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.529 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.530 15:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.530 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.787 15:08:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.787 15:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.787 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.787 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.787 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.787 15:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.787 15:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:55.787 15:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.787 15:08:41 -- host/auth.sh@44 -- # digest=sha256 00:27:55.787 15:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.787 15:08:41 -- host/auth.sh@44 -- # keyid=3 00:27:55.787 15:08:41 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:27:55.787 15:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.787 15:08:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:55.787 15:08:41 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:27:55.787 15:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:27:55.787 15:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.787 15:08:41 -- host/auth.sh@68 -- # digest=sha256 00:27:55.787 15:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:55.787 15:08:41 -- host/auth.sh@68 -- # keyid=3 00:27:55.787 15:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:55.787 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.787 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.787 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.787 15:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.787 15:08:41 -- nvmf/common.sh@717 -- # local ip 00:27:55.787 15:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.787 15:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.787 15:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.787 15:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.787 15:08:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.787 15:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.787 15:08:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.787 15:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.787 15:08:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.787 15:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:55.787 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.787 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.787 nvme0n1 00:27:55.787 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.787 15:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.787 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.787 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:55.787 15:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.787 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.045 15:08:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.045 15:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.045 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.045 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:56.045 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.045 15:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.045 15:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:56.045 15:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.045 15:08:41 -- host/auth.sh@44 -- # digest=sha256 00:27:56.045 15:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.045 15:08:41 -- host/auth.sh@44 -- # keyid=4 00:27:56.045 15:08:41 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:27:56.045 15:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.045 15:08:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:56.045 15:08:41 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:27:56.045 15:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:27:56.045 15:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.045 15:08:41 -- host/auth.sh@68 -- # digest=sha256 00:27:56.045 15:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:56.045 15:08:41 -- host/auth.sh@68 -- # keyid=4 00:27:56.045 15:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:56.045 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.045 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:56.045 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.045 15:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.045 15:08:41 -- nvmf/common.sh@717 -- # local ip 00:27:56.045 15:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.045 15:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.045 15:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.045 15:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.045 15:08:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.045 15:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.045 15:08:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.045 15:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.045 15:08:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.045 15:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:27:56.045 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.045 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:56.045 nvme0n1 00:27:56.045 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.045 15:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.045 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.045 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:56.045 15:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.045 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.045 15:08:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.045 15:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.045 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.045 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:56.304 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.304 15:08:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.304 15:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.304 15:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:56.304 15:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.304 15:08:41 -- host/auth.sh@44 -- # digest=sha256 00:27:56.304 15:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.304 15:08:41 -- host/auth.sh@44 -- # keyid=0 00:27:56.304 15:08:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:27:56.304 15:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.304 15:08:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:56.304 15:08:41 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:27:56.304 15:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:27:56.304 15:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.304 15:08:41 -- host/auth.sh@68 -- # digest=sha256 00:27:56.304 15:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:56.304 15:08:41 -- host/auth.sh@68 -- # keyid=0 00:27:56.304 15:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:56.304 15:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.304 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:56.304 15:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.304 15:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.304 15:08:41 -- nvmf/common.sh@717 -- # local ip 00:27:56.304 15:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.304 15:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.304 15:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.304 15:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.304 15:08:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.304 15:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.304 15:08:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.304 15:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.304 15:08:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.304 15:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:56.304 15:08:41 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:27:56.304 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:27:56.563 nvme0n1 00:27:56.563 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.563 15:08:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.563 15:08:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.563 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.563 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:56.563 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.563 15:08:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.563 15:08:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.563 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.563 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:56.563 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.563 15:08:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.563 15:08:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:56.563 15:08:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.563 15:08:42 -- host/auth.sh@44 -- # digest=sha256 00:27:56.563 15:08:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.563 15:08:42 -- host/auth.sh@44 -- # keyid=1 00:27:56.563 15:08:42 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:56.563 15:08:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.563 15:08:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:56.563 15:08:42 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:56.563 15:08:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:27:56.563 15:08:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.563 15:08:42 -- host/auth.sh@68 -- # digest=sha256 00:27:56.563 15:08:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:56.563 15:08:42 -- host/auth.sh@68 -- # keyid=1 00:27:56.563 15:08:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:56.563 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.563 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:56.563 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.563 15:08:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.563 15:08:42 -- nvmf/common.sh@717 -- # local ip 00:27:56.563 15:08:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.563 15:08:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.563 15:08:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.563 15:08:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.563 15:08:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.563 15:08:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.563 15:08:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.563 15:08:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.563 15:08:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.563 15:08:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:56.563 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.563 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:56.822 nvme0n1 00:27:56.822 
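The block that keeps repeating above is one authentication round: nvmet_auth_set_key provisions a DHHC-1 secret on the kernel nvmet target (the echo 'hmac(sha256)' / echo ffdhe3072 / echo DHHC-1:... triple), and connect_authenticate then pins the SPDK host to the matching digest and DH group before attaching. The secrets follow the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64 secret>:. A minimal sketch of the target-side helper, assuming it writes the usual nvmet configfs attributes; the configfs path and the $NVME_HOSTNQN variable are assumptions, not taken from this trace:

    # hypothetical reconstruction of: nvmet_auth_set_key <digest> <dhgroup> <keyid>
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/$NVME_HOSTNQN  # assumed path
        echo "hmac($digest)"   > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"        > "$host/dhchap_dhgroup"  # e.g. ffdhe3072
        echo "${keys[$keyid]}" > "$host/dhchap_key"      # DHHC-1:xx:<base64>:
    }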
15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.822 15:08:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.822 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.822 15:08:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.822 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:56.822 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.822 15:08:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.822 15:08:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.822 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.822 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:56.822 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.822 15:08:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.822 15:08:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:56.822 15:08:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.822 15:08:42 -- host/auth.sh@44 -- # digest=sha256 00:27:56.822 15:08:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.822 15:08:42 -- host/auth.sh@44 -- # keyid=2 00:27:56.822 15:08:42 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:27:56.822 15:08:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.822 15:08:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:56.822 15:08:42 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:27:56.822 15:08:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:27:56.822 15:08:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.822 15:08:42 -- host/auth.sh@68 -- # digest=sha256 00:27:56.822 15:08:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:56.822 15:08:42 -- host/auth.sh@68 -- # keyid=2 00:27:56.822 15:08:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:56.822 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.822 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:56.822 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.822 15:08:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.822 15:08:42 -- nvmf/common.sh@717 -- # local ip 00:27:56.822 15:08:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.822 15:08:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.822 15:08:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.822 15:08:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.822 15:08:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.822 15:08:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.822 15:08:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.822 15:08:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.822 15:08:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.822 15:08:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:56.822 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.822 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:57.080 nvme0n1 00:27:57.080 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.339 15:08:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.339 15:08:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.339 15:08:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.339 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:57.339 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.339 15:08:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.339 15:08:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.339 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.339 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:57.339 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.339 15:08:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.339 15:08:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:57.339 15:08:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.339 15:08:42 -- host/auth.sh@44 -- # digest=sha256 00:27:57.339 15:08:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.339 15:08:42 -- host/auth.sh@44 -- # keyid=3 00:27:57.339 15:08:42 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:27:57.339 15:08:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.339 15:08:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:57.339 15:08:42 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:27:57.339 15:08:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:27:57.339 15:08:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.339 15:08:42 -- host/auth.sh@68 -- # digest=sha256 00:27:57.339 15:08:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:57.339 15:08:42 -- host/auth.sh@68 -- # keyid=3 00:27:57.339 15:08:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:57.339 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.339 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:57.339 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.339 15:08:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.339 15:08:42 -- nvmf/common.sh@717 -- # local ip 00:27:57.339 15:08:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.339 15:08:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.339 15:08:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.339 15:08:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.339 15:08:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.339 15:08:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.339 15:08:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.339 15:08:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.339 15:08:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.339 15:08:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:57.339 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.339 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:27:57.599 nvme0n1 00:27:57.599 15:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.599 15:08:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.599 15:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.599 15:08:43 -- common/autotest_common.sh@10 -- # set +x 
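Every round is verified and torn down the same way before the next key is tried: list controllers over RPC, assert the name is nvme0, detach. A condensed sketch of the host-side sequence using only the commands visible in this trace (assuming rpc_cmd forwards to SPDK's scripts/rpc.py, as elsewhere in these tests):

    # host side of one round: restrict negotiation, attach, verify, detach
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
    ctrl=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrl == nvme0 ]]        # a failed authentication would leave no controller
    rpc_cmd bdev_nvme_detach_controller nvme0

The 10.0.0.1 initiator address itself comes from the get_main_ns_ip helper seen throughout the trace, which maps the transport to an environment variable (rdma to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP) and echoes the resolved address.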
00:27:57.599 15:08:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.599 15:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.599 15:08:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.599 15:08:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.599 15:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.599 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:27:57.599 15:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.599 15:08:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.599 15:08:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:57.599 15:08:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.599 15:08:43 -- host/auth.sh@44 -- # digest=sha256 00:27:57.599 15:08:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.599 15:08:43 -- host/auth.sh@44 -- # keyid=4 00:27:57.599 15:08:43 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:27:57.599 15:08:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.599 15:08:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:57.599 15:08:43 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:27:57.599 15:08:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:27:57.599 15:08:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.599 15:08:43 -- host/auth.sh@68 -- # digest=sha256 00:27:57.599 15:08:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:57.599 15:08:43 -- host/auth.sh@68 -- # keyid=4 00:27:57.599 15:08:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:57.599 15:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.599 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:27:57.599 15:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.599 15:08:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.599 15:08:43 -- nvmf/common.sh@717 -- # local ip 00:27:57.599 15:08:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.599 15:08:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.599 15:08:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.599 15:08:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.599 15:08:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.599 15:08:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.599 15:08:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.599 15:08:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.599 15:08:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.599 15:08:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.599 15:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.599 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:27:57.858 nvme0n1 00:27:57.858 15:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.858 15:08:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.858 15:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.858 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:27:57.858 15:08:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.116 
15:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.116 15:08:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.116 15:08:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.116 15:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.116 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:27:58.116 15:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.116 15:08:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.116 15:08:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.116 15:08:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:58.116 15:08:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.116 15:08:43 -- host/auth.sh@44 -- # digest=sha256 00:27:58.116 15:08:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.116 15:08:43 -- host/auth.sh@44 -- # keyid=0 00:27:58.116 15:08:43 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:27:58.116 15:08:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:58.116 15:08:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:58.116 15:08:43 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:27:58.116 15:08:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:27:58.116 15:08:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.116 15:08:43 -- host/auth.sh@68 -- # digest=sha256 00:27:58.116 15:08:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:58.116 15:08:43 -- host/auth.sh@68 -- # keyid=0 00:27:58.116 15:08:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:58.116 15:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.116 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:27:58.116 15:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.116 15:08:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.116 15:08:43 -- nvmf/common.sh@717 -- # local ip 00:27:58.116 15:08:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.116 15:08:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.116 15:08:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.116 15:08:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.116 15:08:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.116 15:08:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.116 15:08:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.116 15:08:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.116 15:08:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.116 15:08:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:58.116 15:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.116 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:27:58.681 nvme0n1 00:27:58.681 15:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.681 15:08:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.681 15:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.681 15:08:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.681 15:08:44 -- common/autotest_common.sh@10 -- # set +x 00:27:58.681 15:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.681 15:08:44 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.681 15:08:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.681 15:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.681 15:08:44 -- common/autotest_common.sh@10 -- # set +x 00:27:58.681 15:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.681 15:08:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.681 15:08:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:58.681 15:08:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.681 15:08:44 -- host/auth.sh@44 -- # digest=sha256 00:27:58.681 15:08:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.681 15:08:44 -- host/auth.sh@44 -- # keyid=1 00:27:58.681 15:08:44 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:58.681 15:08:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:58.681 15:08:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:58.681 15:08:44 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:27:58.681 15:08:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:27:58.681 15:08:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.681 15:08:44 -- host/auth.sh@68 -- # digest=sha256 00:27:58.681 15:08:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:58.682 15:08:44 -- host/auth.sh@68 -- # keyid=1 00:27:58.682 15:08:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:58.682 15:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.682 15:08:44 -- common/autotest_common.sh@10 -- # set +x 00:27:58.682 15:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.682 15:08:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.682 15:08:44 -- nvmf/common.sh@717 -- # local ip 00:27:58.682 15:08:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.682 15:08:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.682 15:08:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.682 15:08:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.682 15:08:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.682 15:08:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.682 15:08:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.682 15:08:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.682 15:08:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.682 15:08:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:58.682 15:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.682 15:08:44 -- common/autotest_common.sh@10 -- # set +x 00:27:59.248 nvme0n1 00:27:59.248 15:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.248 15:08:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.248 15:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.248 15:08:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.248 15:08:44 -- common/autotest_common.sh@10 -- # set +x 00:27:59.248 15:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.248 15:08:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.248 15:08:44 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:59.248 15:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.248 15:08:44 -- common/autotest_common.sh@10 -- # set +x 00:27:59.248 15:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.248 15:08:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.248 15:08:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:59.248 15:08:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.248 15:08:44 -- host/auth.sh@44 -- # digest=sha256 00:27:59.248 15:08:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.248 15:08:44 -- host/auth.sh@44 -- # keyid=2 00:27:59.248 15:08:44 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:27:59.248 15:08:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:59.248 15:08:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:59.248 15:08:44 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:27:59.248 15:08:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:27:59.248 15:08:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.248 15:08:44 -- host/auth.sh@68 -- # digest=sha256 00:27:59.248 15:08:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:59.248 15:08:44 -- host/auth.sh@68 -- # keyid=2 00:27:59.248 15:08:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:59.248 15:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.248 15:08:44 -- common/autotest_common.sh@10 -- # set +x 00:27:59.248 15:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.248 15:08:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.248 15:08:44 -- nvmf/common.sh@717 -- # local ip 00:27:59.248 15:08:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.248 15:08:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.248 15:08:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.248 15:08:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.248 15:08:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.248 15:08:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.248 15:08:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.248 15:08:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.248 15:08:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.248 15:08:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:59.248 15:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.248 15:08:44 -- common/autotest_common.sh@10 -- # set +x 00:27:59.813 nvme0n1 00:27:59.813 15:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.813 15:08:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.813 15:08:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.813 15:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.813 15:08:45 -- common/autotest_common.sh@10 -- # set +x 00:27:59.813 15:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.813 15:08:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.813 15:08:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.813 15:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.813 15:08:45 -- common/autotest_common.sh@10 -- # 
set +x 00:27:59.813 15:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.813 15:08:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.813 15:08:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:59.813 15:08:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.813 15:08:45 -- host/auth.sh@44 -- # digest=sha256 00:27:59.813 15:08:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.813 15:08:45 -- host/auth.sh@44 -- # keyid=3 00:27:59.813 15:08:45 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:27:59.813 15:08:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:59.813 15:08:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:59.813 15:08:45 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:27:59.813 15:08:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:27:59.813 15:08:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.813 15:08:45 -- host/auth.sh@68 -- # digest=sha256 00:27:59.813 15:08:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:59.813 15:08:45 -- host/auth.sh@68 -- # keyid=3 00:27:59.813 15:08:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:59.813 15:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.813 15:08:45 -- common/autotest_common.sh@10 -- # set +x 00:27:59.813 15:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.813 15:08:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.813 15:08:45 -- nvmf/common.sh@717 -- # local ip 00:27:59.813 15:08:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.813 15:08:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.813 15:08:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.813 15:08:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.813 15:08:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.813 15:08:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.813 15:08:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.813 15:08:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.813 15:08:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.813 15:08:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:59.813 15:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.813 15:08:45 -- common/autotest_common.sh@10 -- # set +x 00:28:00.380 nvme0n1 00:28:00.380 15:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.380 15:08:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.380 15:08:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:00.380 15:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.380 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:00.380 15:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.380 15:08:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.380 15:08:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.380 15:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.380 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:00.380 15:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.380 15:08:46 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:00.380 15:08:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:00.380 15:08:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:00.380 15:08:46 -- host/auth.sh@44 -- # digest=sha256 00:28:00.380 15:08:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.380 15:08:46 -- host/auth.sh@44 -- # keyid=4 00:28:00.380 15:08:46 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:00.380 15:08:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:00.380 15:08:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:00.380 15:08:46 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:00.380 15:08:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:28:00.380 15:08:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:00.380 15:08:46 -- host/auth.sh@68 -- # digest=sha256 00:28:00.380 15:08:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:00.380 15:08:46 -- host/auth.sh@68 -- # keyid=4 00:28:00.380 15:08:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:00.380 15:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.380 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:00.380 15:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.380 15:08:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:00.380 15:08:46 -- nvmf/common.sh@717 -- # local ip 00:28:00.380 15:08:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:00.380 15:08:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:00.380 15:08:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.380 15:08:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.380 15:08:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:00.380 15:08:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.380 15:08:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:00.380 15:08:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:00.380 15:08:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:00.380 15:08:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.380 15:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.380 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:00.945 nvme0n1 00:28:00.945 15:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.945 15:08:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.945 15:08:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:00.945 15:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.945 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:00.946 15:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.203 15:08:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.203 15:08:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.203 15:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.203 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:01.203 15:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.203 15:08:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.203 15:08:46 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.203 15:08:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:01.203 15:08:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.203 15:08:46 -- host/auth.sh@44 -- # digest=sha256 00:28:01.203 15:08:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.203 15:08:46 -- host/auth.sh@44 -- # keyid=0 00:28:01.203 15:08:46 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:01.203 15:08:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:01.203 15:08:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:01.203 15:08:46 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:01.203 15:08:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:28:01.203 15:08:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.203 15:08:46 -- host/auth.sh@68 -- # digest=sha256 00:28:01.203 15:08:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:01.203 15:08:46 -- host/auth.sh@68 -- # keyid=0 00:28:01.203 15:08:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:01.203 15:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.203 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:01.203 15:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.204 15:08:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.204 15:08:46 -- nvmf/common.sh@717 -- # local ip 00:28:01.204 15:08:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.204 15:08:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.204 15:08:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.204 15:08:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.204 15:08:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.204 15:08:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.204 15:08:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.204 15:08:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.204 15:08:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.204 15:08:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:01.204 15:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.204 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:02.138 nvme0n1 00:28:02.138 15:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.138 15:08:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.138 15:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.138 15:08:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.138 15:08:47 -- common/autotest_common.sh@10 -- # set +x 00:28:02.138 15:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.138 15:08:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.138 15:08:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.138 15:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.138 15:08:47 -- common/autotest_common.sh@10 -- # set +x 00:28:02.138 15:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.138 15:08:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.138 15:08:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:02.138 15:08:47 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.138 15:08:47 -- host/auth.sh@44 -- # digest=sha256 00:28:02.138 15:08:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.138 15:08:47 -- host/auth.sh@44 -- # keyid=1 00:28:02.138 15:08:47 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:02.138 15:08:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:02.138 15:08:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:02.138 15:08:47 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:02.138 15:08:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:28:02.138 15:08:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.138 15:08:47 -- host/auth.sh@68 -- # digest=sha256 00:28:02.138 15:08:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:02.138 15:08:47 -- host/auth.sh@68 -- # keyid=1 00:28:02.138 15:08:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:02.138 15:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.138 15:08:47 -- common/autotest_common.sh@10 -- # set +x 00:28:02.138 15:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.138 15:08:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.138 15:08:47 -- nvmf/common.sh@717 -- # local ip 00:28:02.138 15:08:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.138 15:08:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.138 15:08:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.138 15:08:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.138 15:08:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.138 15:08:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.138 15:08:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.138 15:08:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.138 15:08:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.138 15:08:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:02.138 15:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.138 15:08:47 -- common/autotest_common.sh@10 -- # set +x 00:28:03.070 nvme0n1 00:28:03.070 15:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.070 15:08:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.070 15:08:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.070 15:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.070 15:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:03.070 15:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.328 15:08:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.328 15:08:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.328 15:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.328 15:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:03.328 15:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.328 15:08:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.328 15:08:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:03.328 15:08:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.328 15:08:48 -- host/auth.sh@44 -- # digest=sha256 
00:28:03.328 15:08:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.328 15:08:48 -- host/auth.sh@44 -- # keyid=2 00:28:03.328 15:08:48 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:03.328 15:08:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:03.328 15:08:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:03.328 15:08:48 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:03.328 15:08:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:28:03.328 15:08:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:03.328 15:08:48 -- host/auth.sh@68 -- # digest=sha256 00:28:03.328 15:08:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:03.328 15:08:48 -- host/auth.sh@68 -- # keyid=2 00:28:03.328 15:08:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:03.328 15:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.328 15:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:03.328 15:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.328 15:08:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.328 15:08:48 -- nvmf/common.sh@717 -- # local ip 00:28:03.328 15:08:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.328 15:08:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.328 15:08:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.328 15:08:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.328 15:08:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.328 15:08:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.328 15:08:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.328 15:08:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.328 15:08:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.328 15:08:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:03.328 15:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.328 15:08:48 -- common/autotest_common.sh@10 -- # set +x 00:28:04.262 nvme0n1 00:28:04.262 15:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.262 15:08:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.262 15:08:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.262 15:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.262 15:08:49 -- common/autotest_common.sh@10 -- # set +x 00:28:04.262 15:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.262 15:08:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.262 15:08:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.262 15:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.262 15:08:49 -- common/autotest_common.sh@10 -- # set +x 00:28:04.262 15:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.262 15:08:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.262 15:08:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:04.262 15:08:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.262 15:08:49 -- host/auth.sh@44 -- # digest=sha256 00:28:04.262 15:08:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.262 15:08:49 -- host/auth.sh@44 -- # keyid=3 00:28:04.262 15:08:49 -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:04.262 15:08:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:04.262 15:08:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:04.262 15:08:49 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:04.262 15:08:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:28:04.262 15:08:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.262 15:08:49 -- host/auth.sh@68 -- # digest=sha256 00:28:04.262 15:08:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:04.262 15:08:49 -- host/auth.sh@68 -- # keyid=3 00:28:04.262 15:08:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:04.262 15:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.262 15:08:49 -- common/autotest_common.sh@10 -- # set +x 00:28:04.262 15:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.262 15:08:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.262 15:08:49 -- nvmf/common.sh@717 -- # local ip 00:28:04.262 15:08:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.262 15:08:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.262 15:08:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.262 15:08:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.262 15:08:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:04.262 15:08:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.262 15:08:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.262 15:08:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.262 15:08:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.262 15:08:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:04.262 15:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.262 15:08:49 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 nvme0n1 00:28:05.636 15:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.636 15:08:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.636 15:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.636 15:08:50 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 15:08:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:05.636 15:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.636 15:08:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.636 15:08:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.636 15:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.636 15:08:50 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 15:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.636 15:08:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:05.636 15:08:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:05.636 15:08:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:05.636 15:08:51 -- host/auth.sh@44 -- # digest=sha256 00:28:05.636 15:08:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.636 15:08:51 -- host/auth.sh@44 -- # keyid=4 00:28:05.636 15:08:51 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:05.636 
15:08:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:05.636 15:08:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:05.636 15:08:51 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:05.636 15:08:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:28:05.636 15:08:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:05.636 15:08:51 -- host/auth.sh@68 -- # digest=sha256 00:28:05.636 15:08:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:05.636 15:08:51 -- host/auth.sh@68 -- # keyid=4 00:28:05.636 15:08:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:05.636 15:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.636 15:08:51 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 15:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.636 15:08:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:05.636 15:08:51 -- nvmf/common.sh@717 -- # local ip 00:28:05.636 15:08:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:05.636 15:08:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:05.636 15:08:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.636 15:08:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.636 15:08:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:05.636 15:08:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.636 15:08:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:05.636 15:08:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:05.636 15:08:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:05.636 15:08:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.636 15:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.636 15:08:51 -- common/autotest_common.sh@10 -- # set +x 00:28:06.570 nvme0n1 00:28:06.570 15:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.570 15:08:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.570 15:08:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.570 15:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.570 15:08:51 -- common/autotest_common.sh@10 -- # set +x 00:28:06.570 15:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.570 15:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.570 15:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.570 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.570 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.570 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.570 15:08:52 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:28:06.570 15:08:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.570 15:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.570 15:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:06.570 15:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.570 15:08:52 -- host/auth.sh@44 -- # digest=sha384 00:28:06.570 15:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.570 15:08:52 -- host/auth.sh@44 -- # keyid=0 00:28:06.570 15:08:52 -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:06.570 15:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:06.570 15:08:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:06.570 15:08:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:06.571 15:08:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:28:06.571 15:08:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.571 15:08:52 -- host/auth.sh@68 -- # digest=sha384 00:28:06.571 15:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:06.571 15:08:52 -- host/auth.sh@68 -- # keyid=0 00:28:06.571 15:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:06.571 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.571 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.571 15:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.571 15:08:52 -- nvmf/common.sh@717 -- # local ip 00:28:06.571 15:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.571 15:08:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.571 15:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.571 15:08:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.571 15:08:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.571 15:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.571 15:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.571 15:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.571 15:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.571 15:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:06.571 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.571 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 nvme0n1 00:28:06.571 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.571 15:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.571 15:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.571 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.571 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.571 15:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.571 15:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.571 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.571 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.571 15:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.571 15:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:06.571 15:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.571 15:08:52 -- host/auth.sh@44 -- # digest=sha384 00:28:06.571 15:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.571 15:08:52 -- host/auth.sh@44 -- # keyid=1 00:28:06.571 15:08:52 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:06.571 15:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:06.571 
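
The echo 'hmac(sha384)' / echo ffdhe2048 / echo DHHC-1:... triple at host/auth.sh@47-49 is nvmet_auth_set_key pushing the expected digest, DH group and secret to the kernel target; the redirections themselves do not appear in xtrace output. A minimal sketch of that helper, assuming the standard nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key) and path layout, none of which are visible in the log:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}
        # Host entry for the initiator's NQN; the configfs path is assumed.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"
        echo "${dhgroup}" > "${host}/dhchap_dhgroup"
        echo "${key}" > "${host}/dhchap_key"
    }
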
15:08:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:06.571 15:08:52 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:06.571 15:08:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:28:06.571 15:08:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.571 15:08:52 -- host/auth.sh@68 -- # digest=sha384 00:28:06.571 15:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:06.571 15:08:52 -- host/auth.sh@68 -- # keyid=1 00:28:06.571 15:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:06.571 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.571 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.571 15:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.571 15:08:52 -- nvmf/common.sh@717 -- # local ip 00:28:06.571 15:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.571 15:08:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.571 15:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.571 15:08:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.571 15:08:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.571 15:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.571 15:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.571 15:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.571 15:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.571 15:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:06.571 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.571 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.829 nvme0n1 00:28:06.829 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.829 15:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.829 15:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.829 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.829 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.829 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.829 15:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.829 15:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.829 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.829 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.829 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.829 15:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.829 15:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:06.829 15:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.829 15:08:52 -- host/auth.sh@44 -- # digest=sha384 00:28:06.829 15:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.829 15:08:52 -- host/auth.sh@44 -- # keyid=2 00:28:06.829 15:08:52 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:06.829 15:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:06.829 15:08:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:06.829 15:08:52 -- host/auth.sh@49 -- # echo 
DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:06.829 15:08:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:28:06.829 15:08:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.829 15:08:52 -- host/auth.sh@68 -- # digest=sha384 00:28:06.829 15:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:06.829 15:08:52 -- host/auth.sh@68 -- # keyid=2 00:28:06.829 15:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:06.829 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.829 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.829 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.829 15:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.829 15:08:52 -- nvmf/common.sh@717 -- # local ip 00:28:06.829 15:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.829 15:08:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.829 15:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.829 15:08:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.829 15:08:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.829 15:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.829 15:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.829 15:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.829 15:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.829 15:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:06.829 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.829 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:06.829 nvme0n1 00:28:06.829 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.830 15:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.830 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.830 15:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.830 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.088 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.088 15:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.088 15:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.088 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.088 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.088 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.088 15:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.088 15:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:07.088 15:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.088 15:08:52 -- host/auth.sh@44 -- # digest=sha384 00:28:07.088 15:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.088 15:08:52 -- host/auth.sh@44 -- # keyid=3 00:28:07.088 15:08:52 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:07.088 15:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:07.088 15:08:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:07.088 15:08:52 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:07.088 15:08:52 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:28:07.088 15:08:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.088 15:08:52 -- host/auth.sh@68 -- # digest=sha384 00:28:07.088 15:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:07.088 15:08:52 -- host/auth.sh@68 -- # keyid=3 00:28:07.088 15:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:07.088 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.088 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.088 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.088 15:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.088 15:08:52 -- nvmf/common.sh@717 -- # local ip 00:28:07.088 15:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.088 15:08:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.088 15:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.088 15:08:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.088 15:08:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.088 15:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.088 15:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.088 15:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.088 15:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.088 15:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:07.088 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.088 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.088 nvme0n1 00:28:07.088 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.088 15:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.088 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.088 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.088 15:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.088 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.088 15:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.088 15:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.088 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.088 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.088 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.088 15:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.088 15:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:07.088 15:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.088 15:08:52 -- host/auth.sh@44 -- # digest=sha384 00:28:07.088 15:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.088 15:08:52 -- host/auth.sh@44 -- # keyid=4 00:28:07.088 15:08:52 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:07.088 15:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:07.088 15:08:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:07.088 15:08:52 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:07.088 15:08:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:28:07.088 15:08:52 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:28:07.088 15:08:52 -- host/auth.sh@68 -- # digest=sha384 00:28:07.088 15:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:07.088 15:08:52 -- host/auth.sh@68 -- # keyid=4 00:28:07.088 15:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:07.088 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.088 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.346 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.346 15:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.346 15:08:52 -- nvmf/common.sh@717 -- # local ip 00:28:07.346 15:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.346 15:08:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.346 15:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.346 15:08:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.346 15:08:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.346 15:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.346 15:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.346 15:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.346 15:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.346 15:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.346 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.346 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.346 nvme0n1 00:28:07.346 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.346 15:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.346 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.346 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.346 15:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.346 15:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.346 15:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.346 15:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.346 15:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.347 15:08:52 -- common/autotest_common.sh@10 -- # set +x 00:28:07.347 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.347 15:08:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.347 15:08:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.347 15:08:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:07.347 15:08:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.347 15:08:53 -- host/auth.sh@44 -- # digest=sha384 00:28:07.347 15:08:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.347 15:08:53 -- host/auth.sh@44 -- # keyid=0 00:28:07.347 15:08:53 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:07.347 15:08:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:07.347 15:08:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:07.347 15:08:53 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:07.347 15:08:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:28:07.347 15:08:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.347 15:08:53 -- host/auth.sh@68 -- # 
digest=sha384 00:28:07.347 15:08:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:07.347 15:08:53 -- host/auth.sh@68 -- # keyid=0 00:28:07.347 15:08:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:07.347 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.347 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:07.347 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.347 15:08:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.347 15:08:53 -- nvmf/common.sh@717 -- # local ip 00:28:07.347 15:08:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.347 15:08:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.347 15:08:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.347 15:08:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.347 15:08:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.347 15:08:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.347 15:08:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.347 15:08:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.347 15:08:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.347 15:08:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:07.347 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.347 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:07.634 nvme0n1 00:28:07.634 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.634 15:08:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.634 15:08:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.634 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.634 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:07.634 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.634 15:08:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.634 15:08:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.634 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.634 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:07.634 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.634 15:08:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.634 15:08:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:07.634 15:08:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.634 15:08:53 -- host/auth.sh@44 -- # digest=sha384 00:28:07.634 15:08:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.634 15:08:53 -- host/auth.sh@44 -- # keyid=1 00:28:07.634 15:08:53 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:07.634 15:08:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:07.634 15:08:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:07.634 15:08:53 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:07.634 15:08:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:28:07.634 15:08:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.634 15:08:53 -- host/auth.sh@68 -- # digest=sha384 00:28:07.634 15:08:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:07.634 15:08:53 -- host/auth.sh@68 
-- # keyid=1 00:28:07.634 15:08:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:07.634 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.634 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:07.634 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.634 15:08:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.634 15:08:53 -- nvmf/common.sh@717 -- # local ip 00:28:07.634 15:08:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.634 15:08:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.634 15:08:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.634 15:08:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.634 15:08:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.634 15:08:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.634 15:08:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.634 15:08:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.634 15:08:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.634 15:08:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:07.634 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.634 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:07.897 nvme0n1 00:28:07.897 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.897 15:08:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.897 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.897 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:07.897 15:08:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.897 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.897 15:08:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.897 15:08:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.897 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.897 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:07.897 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.897 15:08:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.897 15:08:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:07.897 15:08:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.897 15:08:53 -- host/auth.sh@44 -- # digest=sha384 00:28:07.897 15:08:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.897 15:08:53 -- host/auth.sh@44 -- # keyid=2 00:28:07.897 15:08:53 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:07.897 15:08:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:07.897 15:08:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:07.897 15:08:53 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:07.897 15:08:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:28:07.897 15:08:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.897 15:08:53 -- host/auth.sh@68 -- # digest=sha384 00:28:07.897 15:08:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:07.897 15:08:53 -- host/auth.sh@68 -- # keyid=2 00:28:07.897 15:08:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:07.897 15:08:53 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.897 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:07.897 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.897 15:08:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.897 15:08:53 -- nvmf/common.sh@717 -- # local ip 00:28:07.897 15:08:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.897 15:08:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.897 15:08:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.897 15:08:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.897 15:08:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.897 15:08:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.897 15:08:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.897 15:08:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.897 15:08:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.897 15:08:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:07.897 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.897 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:08.156 nvme0n1 00:28:08.156 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.156 15:08:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.156 15:08:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.156 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.156 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:08.156 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.156 15:08:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.156 15:08:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.156 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.156 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:08.156 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.156 15:08:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.156 15:08:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:08.156 15:08:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.156 15:08:53 -- host/auth.sh@44 -- # digest=sha384 00:28:08.156 15:08:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.156 15:08:53 -- host/auth.sh@44 -- # keyid=3 00:28:08.156 15:08:53 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:08.156 15:08:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:08.156 15:08:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:08.156 15:08:53 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:08.156 15:08:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:28:08.156 15:08:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.156 15:08:53 -- host/auth.sh@68 -- # digest=sha384 00:28:08.156 15:08:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:08.156 15:08:53 -- host/auth.sh@68 -- # keyid=3 00:28:08.156 15:08:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:08.156 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.156 15:08:53 -- common/autotest_common.sh@10 -- # set +x 
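
The recurring host/auth.sh@107-111 markers outline the driver loop: every digest is crossed with every DH group and every key id, and each combination is first installed on the target, then exercised from the initiator. Reconstructed from those markers (array contents beyond the values visible in the trace are not quoted here):

    for digest in "${digests[@]}"; do                              # @107: sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do                        # @108: ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do                         # @109: key ids 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # @110: target side
                connect_authenticate "$digest" "$dhgroup" "$keyid" # @111: initiator side
            done
        done
    done
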
00:28:08.156 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.156 15:08:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.156 15:08:53 -- nvmf/common.sh@717 -- # local ip 00:28:08.156 15:08:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.156 15:08:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.156 15:08:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.156 15:08:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.156 15:08:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.156 15:08:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.156 15:08:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.156 15:08:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.156 15:08:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.156 15:08:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:08.156 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.156 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:08.414 nvme0n1 00:28:08.414 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.414 15:08:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.414 15:08:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.414 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.414 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:08.414 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.414 15:08:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.414 15:08:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.414 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.414 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:08.414 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.414 15:08:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.414 15:08:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:08.414 15:08:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.414 15:08:53 -- host/auth.sh@44 -- # digest=sha384 00:28:08.414 15:08:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.414 15:08:53 -- host/auth.sh@44 -- # keyid=4 00:28:08.414 15:08:53 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:08.414 15:08:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:08.414 15:08:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:08.414 15:08:53 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:08.414 15:08:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:28:08.414 15:08:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.414 15:08:53 -- host/auth.sh@68 -- # digest=sha384 00:28:08.414 15:08:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:08.414 15:08:53 -- host/auth.sh@68 -- # keyid=4 00:28:08.414 15:08:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:08.414 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.414 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:08.414 15:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
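
connect_authenticate itself (the @66-@74 lines) reads as: pin the initiator to the single digest/DH-group pair under test, attach the controller with the matching key, confirm a controller named nvme0 actually came up, and detach again so the next combination starts clean. A condensed sketch of what the trace shows, with waits and error handling elided (get_main_ns_ip is reconstructed further below):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}"
        # Only a successful DH-HMAC-CHAP handshake leaves a controller behind.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Reapplying bdev_nvme_set_options before each attach matters: the digest/DH-group restriction only takes effect for controllers created afterwards, which is presumably why the trace detaches and reconfigures on every iteration.
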
00:28:08.414 15:08:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.414 15:08:53 -- nvmf/common.sh@717 -- # local ip 00:28:08.414 15:08:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.414 15:08:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.414 15:08:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.414 15:08:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.414 15:08:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.414 15:08:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.414 15:08:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.414 15:08:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.414 15:08:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.414 15:08:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.414 15:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.414 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:28:08.672 nvme0n1 00:28:08.672 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.672 15:08:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.672 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.672 15:08:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.672 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:08.672 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.672 15:08:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.672 15:08:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.672 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.672 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:08.672 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.672 15:08:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.672 15:08:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.672 15:08:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:08.672 15:08:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.672 15:08:54 -- host/auth.sh@44 -- # digest=sha384 00:28:08.672 15:08:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.672 15:08:54 -- host/auth.sh@44 -- # keyid=0 00:28:08.672 15:08:54 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:08.672 15:08:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:08.672 15:08:54 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:08.672 15:08:54 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:08.672 15:08:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:28:08.672 15:08:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.672 15:08:54 -- host/auth.sh@68 -- # digest=sha384 00:28:08.672 15:08:54 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:08.672 15:08:54 -- host/auth.sh@68 -- # keyid=0 00:28:08.672 15:08:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:08.672 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.672 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:08.672 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.672 15:08:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.672 15:08:54 -- 
nvmf/common.sh@717 -- # local ip 00:28:08.672 15:08:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.672 15:08:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.672 15:08:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.672 15:08:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.672 15:08:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.672 15:08:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.672 15:08:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.672 15:08:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.672 15:08:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.672 15:08:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:08.672 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.672 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:08.930 nvme0n1 00:28:08.930 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.930 15:08:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.930 15:08:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.930 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.930 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:08.930 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.930 15:08:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.930 15:08:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.930 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.930 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:08.930 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.930 15:08:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.930 15:08:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:08.930 15:08:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.930 15:08:54 -- host/auth.sh@44 -- # digest=sha384 00:28:08.930 15:08:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.930 15:08:54 -- host/auth.sh@44 -- # keyid=1 00:28:08.930 15:08:54 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:08.930 15:08:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:08.930 15:08:54 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:08.930 15:08:54 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:08.930 15:08:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:28:08.930 15:08:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.930 15:08:54 -- host/auth.sh@68 -- # digest=sha384 00:28:08.930 15:08:54 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:08.930 15:08:54 -- host/auth.sh@68 -- # keyid=1 00:28:08.930 15:08:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:08.930 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.930 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:08.930 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.930 15:08:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.930 15:08:54 -- nvmf/common.sh@717 -- # local ip 00:28:08.930 15:08:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.930 15:08:54 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.930 15:08:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.930 15:08:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.930 15:08:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.930 15:08:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.930 15:08:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.930 15:08:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.930 15:08:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.930 15:08:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:08.930 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.930 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:09.186 nvme0n1 00:28:09.186 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.186 15:08:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.186 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.186 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:09.186 15:08:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.186 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.186 15:08:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.186 15:08:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.186 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.186 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:09.443 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.443 15:08:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.443 15:08:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:09.443 15:08:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.443 15:08:54 -- host/auth.sh@44 -- # digest=sha384 00:28:09.443 15:08:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.443 15:08:54 -- host/auth.sh@44 -- # keyid=2 00:28:09.443 15:08:54 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:09.443 15:08:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:09.443 15:08:54 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:09.443 15:08:54 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:09.443 15:08:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:28:09.443 15:08:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.443 15:08:54 -- host/auth.sh@68 -- # digest=sha384 00:28:09.443 15:08:54 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:09.443 15:08:54 -- host/auth.sh@68 -- # keyid=2 00:28:09.443 15:08:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:09.443 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.443 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:09.443 15:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.443 15:08:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.443 15:08:54 -- nvmf/common.sh@717 -- # local ip 00:28:09.443 15:08:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:09.444 15:08:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.444 15:08:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.444 15:08:54 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.444 15:08:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:09.444 15:08:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.444 15:08:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.444 15:08:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.444 15:08:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.444 15:08:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:09.444 15:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.444 15:08:54 -- common/autotest_common.sh@10 -- # set +x 00:28:09.701 nvme0n1 00:28:09.701 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.701 15:08:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.701 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.701 15:08:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.701 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:09.701 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.701 15:08:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.701 15:08:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.701 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.701 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:09.701 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.701 15:08:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.701 15:08:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:09.701 15:08:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.702 15:08:55 -- host/auth.sh@44 -- # digest=sha384 00:28:09.702 15:08:55 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.702 15:08:55 -- host/auth.sh@44 -- # keyid=3 00:28:09.702 15:08:55 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:09.702 15:08:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:09.702 15:08:55 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:09.702 15:08:55 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:09.702 15:08:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:28:09.702 15:08:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.702 15:08:55 -- host/auth.sh@68 -- # digest=sha384 00:28:09.702 15:08:55 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:09.702 15:08:55 -- host/auth.sh@68 -- # keyid=3 00:28:09.702 15:08:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:09.702 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.702 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:09.702 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.702 15:08:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.702 15:08:55 -- nvmf/common.sh@717 -- # local ip 00:28:09.702 15:08:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:09.702 15:08:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.702 15:08:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.702 15:08:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.702 15:08:55 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:28:09.702 15:08:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.702 15:08:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.702 15:08:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.702 15:08:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.702 15:08:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:09.702 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.702 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:09.959 nvme0n1 00:28:09.959 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.959 15:08:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.959 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.959 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:09.959 15:08:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.959 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.959 15:08:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.959 15:08:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.959 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.959 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:09.959 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.959 15:08:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.959 15:08:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:09.959 15:08:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.959 15:08:55 -- host/auth.sh@44 -- # digest=sha384 00:28:09.959 15:08:55 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.959 15:08:55 -- host/auth.sh@44 -- # keyid=4 00:28:09.959 15:08:55 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:09.959 15:08:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:09.959 15:08:55 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:09.959 15:08:55 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:09.959 15:08:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:28:09.959 15:08:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.959 15:08:55 -- host/auth.sh@68 -- # digest=sha384 00:28:09.959 15:08:55 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:09.959 15:08:55 -- host/auth.sh@68 -- # keyid=4 00:28:09.959 15:08:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:09.959 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.959 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:09.959 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.959 15:08:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.959 15:08:55 -- nvmf/common.sh@717 -- # local ip 00:28:09.959 15:08:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:09.959 15:08:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.959 15:08:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.959 15:08:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.959 15:08:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:09.959 15:08:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:28:09.959 15:08:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.959 15:08:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.959 15:08:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.959 15:08:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.959 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.959 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:10.217 nvme0n1 00:28:10.217 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.217 15:08:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.217 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.217 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:10.217 15:08:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.217 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.474 15:08:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.474 15:08:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.474 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.474 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:10.474 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.474 15:08:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.474 15:08:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.474 15:08:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:10.474 15:08:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.474 15:08:55 -- host/auth.sh@44 -- # digest=sha384 00:28:10.474 15:08:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.474 15:08:55 -- host/auth.sh@44 -- # keyid=0 00:28:10.474 15:08:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:10.474 15:08:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:10.474 15:08:55 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:10.474 15:08:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:10.474 15:08:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:28:10.474 15:08:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.474 15:08:55 -- host/auth.sh@68 -- # digest=sha384 00:28:10.474 15:08:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:10.474 15:08:55 -- host/auth.sh@68 -- # keyid=0 00:28:10.474 15:08:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:10.474 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.474 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:10.474 15:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.474 15:08:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.474 15:08:55 -- nvmf/common.sh@717 -- # local ip 00:28:10.474 15:08:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.474 15:08:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.474 15:08:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.474 15:08:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.474 15:08:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.474 15:08:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.474 15:08:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.474 
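
The bare nvme0n1 tokens scattered through the log appear to be return values rather than stray output: rpc_cmd prints each RPC's result, and bdev_nvme_attach_controller reports the name(s) of the bdevs it created once the handshake succeeds. A hypothetical caller that wants the name rather than just the side effect could capture it (illustration only, not something host/auth.sh does in this trace):

    bdev=$(rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0)
    [[ $bdev == nvme0n1 ]] || echo "unexpected bdev: $bdev" >&2
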
15:08:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.474 15:08:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.474 15:08:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:10.474 15:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.474 15:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:11.040 nvme0n1 00:28:11.040 15:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.040 15:08:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.040 15:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.040 15:08:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.040 15:08:56 -- common/autotest_common.sh@10 -- # set +x 00:28:11.040 15:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.040 15:08:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.040 15:08:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.040 15:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.040 15:08:56 -- common/autotest_common.sh@10 -- # set +x 00:28:11.040 15:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.040 15:08:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.040 15:08:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:11.040 15:08:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.040 15:08:56 -- host/auth.sh@44 -- # digest=sha384 00:28:11.040 15:08:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.040 15:08:56 -- host/auth.sh@44 -- # keyid=1 00:28:11.040 15:08:56 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:11.040 15:08:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:11.040 15:08:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:11.040 15:08:56 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:11.040 15:08:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:28:11.040 15:08:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.040 15:08:56 -- host/auth.sh@68 -- # digest=sha384 00:28:11.040 15:08:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:11.040 15:08:56 -- host/auth.sh@68 -- # keyid=1 00:28:11.040 15:08:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:11.040 15:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.040 15:08:56 -- common/autotest_common.sh@10 -- # set +x 00:28:11.040 15:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.040 15:08:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.040 15:08:56 -- nvmf/common.sh@717 -- # local ip 00:28:11.040 15:08:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.040 15:08:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.040 15:08:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.040 15:08:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.041 15:08:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.041 15:08:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.041 15:08:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.041 15:08:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.041 15:08:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
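
The nvmf/common.sh@717-731 block that runs before every attach is get_main_ns_ip: it maps the transport to the name of the environment variable holding the target address and expands that name indirectly, which is why the literal strings NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP show up in the trace before the final 10.0.0.1. Reconstructed from those lines (using TEST_TRANSPORT as the selector is an assumption; only its value, tcp, is visible here):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Bail out if the transport is unset or has no mapped variable.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # the mapped variable itself is empty
        echo "${!ip}"
    }
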
00:28:11.041 15:08:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:28:11.041 15:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:11.041 15:08:56 -- common/autotest_common.sh@10 -- # set +x
00:28:11.606 nvme0n1
00:28:11.606 15:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:11.606 15:08:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:11.606 15:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:11.606 15:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:11.606 15:08:57 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:11.606 15:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:11.606 15:08:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:11.606 15:08:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:11.606 15:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:11.606 15:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:11.606 15:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:11.606 15:08:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:11.606 15:08:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:28:11.606 15:08:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:11.606 15:08:57 -- host/auth.sh@44 -- # digest=sha384
00:28:11.606 15:08:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:11.606 15:08:57 -- host/auth.sh@44 -- # keyid=2
00:28:11.606 15:08:57 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2:
00:28:11.606 15:08:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:28:11.606 15:08:57 -- host/auth.sh@48 -- # echo ffdhe6144
00:28:11.606 15:08:57 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2:
00:28:11.606 15:08:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2
00:28:11.606 15:08:57 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:11.606 15:08:57 -- host/auth.sh@68 -- # digest=sha384
00:28:11.606 15:08:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:28:11.606 15:08:57 -- host/auth.sh@68 -- # keyid=2
00:28:11.606 15:08:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:11.606 15:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:11.606 15:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:11.606 15:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:11.606 15:08:57 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:11.606 15:08:57 -- nvmf/common.sh@717 -- # local ip
00:28:11.606 15:08:57 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:11.606 15:08:57 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:11.606 15:08:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:11.606 15:08:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:11.606 15:08:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:11.606 15:08:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:11.607 15:08:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:11.607 15:08:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:11.607 15:08:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:11.607 15:08:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:11.607 15:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:11.607 15:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:12.173 nvme0n1
00:28:12.173 15:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.173 15:08:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:12.173 15:08:57 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:12.173 15:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.173 15:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:12.173 15:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.173 15:08:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:12.173 15:08:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:12.173 15:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.173 15:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:12.173 15:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.173 15:08:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:12.173 15:08:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:28:12.173 15:08:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:12.173 15:08:57 -- host/auth.sh@44 -- # digest=sha384
00:28:12.173 15:08:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:12.173 15:08:57 -- host/auth.sh@44 -- # keyid=3
00:28:12.173 15:08:57 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==:
00:28:12.173 15:08:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:28:12.173 15:08:57 -- host/auth.sh@48 -- # echo ffdhe6144
00:28:12.173 15:08:57 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==:
00:28:12.173 15:08:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3
00:28:12.173 15:08:57 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:12.173 15:08:57 -- host/auth.sh@68 -- # digest=sha384
00:28:12.173 15:08:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:28:12.173 15:08:57 -- host/auth.sh@68 -- # keyid=3
00:28:12.173 15:08:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:12.173 15:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.173 15:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:12.173 15:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.173 15:08:57 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:12.173 15:08:57 -- nvmf/common.sh@717 -- # local ip
00:28:12.173 15:08:57 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:12.173 15:08:57 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:12.173 15:08:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:12.173 15:08:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:12.173 15:08:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:12.173 15:08:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:12.173 15:08:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:12.173 15:08:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:12.173 15:08:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:12.173 15:08:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:28:12.173 15:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.173 15:08:57 -- common/autotest_common.sh@10 -- # set +x
00:28:12.739 nvme0n1
00:28:12.739 15:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.739 15:08:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:12.739 15:08:58 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:12.739 15:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.739 15:08:58 -- common/autotest_common.sh@10 -- # set +x
00:28:12.739 15:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.739 15:08:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:12.739 15:08:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:12.739 15:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.739 15:08:58 -- common/autotest_common.sh@10 -- # set +x
00:28:12.739 15:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.739 15:08:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:12.739 15:08:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:28:12.739 15:08:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:12.739 15:08:58 -- host/auth.sh@44 -- # digest=sha384
00:28:12.739 15:08:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:12.739 15:08:58 -- host/auth.sh@44 -- # keyid=4
00:28:12.739 15:08:58 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=:
00:28:12.739 15:08:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:28:12.739 15:08:58 -- host/auth.sh@48 -- # echo ffdhe6144
00:28:12.739 15:08:58 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=:
00:28:12.739 15:08:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4
00:28:12.739 15:08:58 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:12.739 15:08:58 -- host/auth.sh@68 -- # digest=sha384
00:28:12.739 15:08:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:28:12.739 15:08:58 -- host/auth.sh@68 -- # keyid=4
00:28:12.739 15:08:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:12.739 15:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.739 15:08:58 -- common/autotest_common.sh@10 -- # set +x
00:28:12.998 15:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:12.998 15:08:58 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:12.998 15:08:58 -- nvmf/common.sh@717 -- # local ip
00:28:12.998 15:08:58 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:12.998 15:08:58 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:12.998 15:08:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:12.998 15:08:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:12.998 15:08:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:12.998 15:08:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:12.998 15:08:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:12.998 15:08:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:12.998 15:08:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:12.998 15:08:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:12.998 15:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:12.998 15:08:58 -- common/autotest_common.sh@10 -- # set +x
00:28:13.564 nvme0n1
00:28:13.564 15:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:13.564 15:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:13.564 15:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:13.564 15:08:59 -- common/autotest_common.sh@10 -- # set +x
00:28:13.564 15:08:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:13.564 15:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:13.564 15:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:13.564 15:08:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:13.564 15:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:13.564 15:08:59 -- common/autotest_common.sh@10 -- # set +x
00:28:13.564 15:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:13.564 15:08:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:28:13.564 15:08:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:13.564 15:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:28:13.564 15:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:13.564 15:08:59 -- host/auth.sh@44 -- # digest=sha384
00:28:13.564 15:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:13.564 15:08:59 -- host/auth.sh@44 -- # keyid=0
00:28:13.564 15:08:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB:
00:28:13.564 15:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:28:13.564 15:08:59 -- host/auth.sh@48 -- # echo ffdhe8192
00:28:13.564 15:08:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB:
00:28:13.564 15:08:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0
00:28:13.564 15:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:13.564 15:08:59 -- host/auth.sh@68 -- # digest=sha384
00:28:13.564 15:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:28:13.564 15:08:59 -- host/auth.sh@68 -- # keyid=0
00:28:13.564 15:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:13.564 15:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:13.564 15:08:59 -- common/autotest_common.sh@10 -- # set +x
00:28:13.564 15:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:13.564 15:08:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:13.564 15:08:59 -- nvmf/common.sh@717 -- # local ip
00:28:13.564 15:08:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:13.564 15:08:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:13.564 15:08:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:13.564 15:08:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:13.564 15:08:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:13.564 15:08:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:13.564 15:08:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:13.564 15:08:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:13.564 15:08:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:13.564 15:08:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:28:13.564 15:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:13.564 15:08:59 -- common/autotest_common.sh@10 -- # set +x
00:28:14.498 nvme0n1
00:28:14.498 15:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
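The get_main_ns_ip records that repeat above (the ip_candidates assignments, the [[ -z ]] checks, the final echo) are a bash variable-indirection idiom: the transport name selects the name of an environment variable, which is then dereferenced with ${!var}. A reduced sketch under the assumption that TEST_TRANSPORT and NVMF_INITIATOR_IP are exported by the harness:

    # Reduced sketch of the IP lookup traced from nvmf/common.sh.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Map transport -> variable *name*, then dereference it with ${!ip}.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip  # prints 10.0.0.1

This explains why the trace shows [[ -z tcp ]] followed by [[ -z NVMF_INITIATOR_IP ]]: the first test sees the expanded transport, the second sees the variable name before indirection.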
00:28:14.498 15:09:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:14.498 15:09:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:14.498 15:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:14.498 15:09:00 -- common/autotest_common.sh@10 -- # set +x
00:28:14.498 15:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:14.498 15:09:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:14.498 15:09:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:14.498 15:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:14.498 15:09:00 -- common/autotest_common.sh@10 -- # set +x
00:28:14.498 15:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:14.498 15:09:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:14.498 15:09:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:28:14.498 15:09:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:14.498 15:09:00 -- host/auth.sh@44 -- # digest=sha384
00:28:14.498 15:09:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:14.498 15:09:00 -- host/auth.sh@44 -- # keyid=1
00:28:14.498 15:09:00 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==:
00:28:14.498 15:09:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:28:14.498 15:09:00 -- host/auth.sh@48 -- # echo ffdhe8192
00:28:14.498 15:09:00 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==:
00:28:14.498 15:09:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1
00:28:14.498 15:09:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:14.498 15:09:00 -- host/auth.sh@68 -- # digest=sha384
00:28:14.498 15:09:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:28:14.498 15:09:00 -- host/auth.sh@68 -- # keyid=1
00:28:14.498 15:09:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:14.498 15:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:14.498 15:09:00 -- common/autotest_common.sh@10 -- # set +x
00:28:14.498 15:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:14.498 15:09:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:14.498 15:09:00 -- nvmf/common.sh@717 -- # local ip
00:28:14.498 15:09:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:14.498 15:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:14.498 15:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:14.498 15:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:14.498 15:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:14.498 15:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:14.498 15:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:14.498 15:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:14.498 15:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:14.498 15:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:28:14.498 15:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:14.498 15:09:00 -- common/autotest_common.sh@10 -- # set +x
00:28:15.447 nvme0n1
00:28:15.447 15:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:15.447 15:09:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:15.447 15:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:15.447 15:09:01 -- common/autotest_common.sh@10 -- # set +x
00:28:15.448 15:09:01 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:15.448 15:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:15.448 15:09:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:15.448 15:09:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:15.448 15:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:15.448 15:09:01 -- common/autotest_common.sh@10 -- # set +x
00:28:15.448 15:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:15.448 15:09:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:15.448 15:09:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:28:15.448 15:09:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:15.448 15:09:01 -- host/auth.sh@44 -- # digest=sha384
00:28:15.448 15:09:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:15.448 15:09:01 -- host/auth.sh@44 -- # keyid=2
00:28:15.448 15:09:01 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2:
00:28:15.448 15:09:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:28:15.448 15:09:01 -- host/auth.sh@48 -- # echo ffdhe8192
00:28:15.448 15:09:01 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2:
00:28:15.448 15:09:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2
00:28:15.448 15:09:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:15.448 15:09:01 -- host/auth.sh@68 -- # digest=sha384
00:28:15.448 15:09:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:28:15.448 15:09:01 -- host/auth.sh@68 -- # keyid=2
00:28:15.448 15:09:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:15.448 15:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:15.448 15:09:01 -- common/autotest_common.sh@10 -- # set +x
00:28:15.448 15:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:15.448 15:09:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:15.448 15:09:01 -- nvmf/common.sh@717 -- # local ip
00:28:15.448 15:09:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:15.448 15:09:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:15.448 15:09:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:15.448 15:09:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:15.448 15:09:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:15.448 15:09:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:15.448 15:09:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:15.448 15:09:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:15.448 15:09:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:15.448 15:09:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:15.448 15:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:15.448 15:09:01 -- common/autotest_common.sh@10 -- # set +x
00:28:16.383 nvme0n1
00:28:16.383 15:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:16.383 15:09:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:16.383 15:09:02 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:16.383 15:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable
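The oddly escaped [[ nvme0 == \n\v\m\e\0 ]] records are just xtrace requoting: the right-hand side of == inside [[ ]] is a glob pattern, so bash's trace escapes every character to show the operand is matched literally. The check itself is the one-liner below (names taken from the trace; the variable is illustrative):

    # The RPC returns a JSON array of controllers; jq extracts the names and
    # the test expects exactly "nvme0" back after an authenticated attach.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # xtrace renders the unquoted form as \n\v\m\e\0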
00:28:16.383 15:09:02 -- common/autotest_common.sh@10 -- # set +x
00:28:16.383 15:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:16.641 15:09:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:16.641 15:09:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:16.641 15:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:16.641 15:09:02 -- common/autotest_common.sh@10 -- # set +x
00:28:16.641 15:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:16.641 15:09:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:16.641 15:09:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:28:16.641 15:09:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:16.641 15:09:02 -- host/auth.sh@44 -- # digest=sha384
00:28:16.641 15:09:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:16.641 15:09:02 -- host/auth.sh@44 -- # keyid=3
00:28:16.641 15:09:02 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==:
00:28:16.641 15:09:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:28:16.641 15:09:02 -- host/auth.sh@48 -- # echo ffdhe8192
00:28:16.641 15:09:02 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==:
00:28:16.641 15:09:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3
00:28:16.641 15:09:02 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:16.641 15:09:02 -- host/auth.sh@68 -- # digest=sha384
00:28:16.641 15:09:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:28:16.641 15:09:02 -- host/auth.sh@68 -- # keyid=3
00:28:16.641 15:09:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:16.641 15:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:16.641 15:09:02 -- common/autotest_common.sh@10 -- # set +x
00:28:16.641 15:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:16.641 15:09:02 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:16.641 15:09:02 -- nvmf/common.sh@717 -- # local ip
00:28:16.641 15:09:02 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:16.641 15:09:02 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:16.641 15:09:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:16.641 15:09:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:16.641 15:09:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:16.641 15:09:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:16.641 15:09:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:16.641 15:09:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:16.641 15:09:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:16.641 15:09:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:28:16.641 15:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:16.641 15:09:02 -- common/autotest_common.sh@10 -- # set +x
00:28:17.575 nvme0n1
00:28:17.575 15:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:17.575 15:09:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:17.575 15:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:17.575 15:09:03 -- common/autotest_common.sh@10 -- # set +x
00:28:17.575 15:09:03 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:17.575 15:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:17.575 15:09:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:17.575 15:09:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:17.575 15:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:17.575 15:09:03 -- common/autotest_common.sh@10 -- # set +x
00:28:17.575 15:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:17.575 15:09:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:17.575 15:09:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:28:17.575 15:09:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:17.575 15:09:03 -- host/auth.sh@44 -- # digest=sha384
00:28:17.575 15:09:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:17.575 15:09:03 -- host/auth.sh@44 -- # keyid=4
00:28:17.575 15:09:03 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=:
00:28:17.575 15:09:03 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:28:17.575 15:09:03 -- host/auth.sh@48 -- # echo ffdhe8192
00:28:17.575 15:09:03 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=:
00:28:17.575 15:09:03 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4
00:28:17.575 15:09:03 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:17.575 15:09:03 -- host/auth.sh@68 -- # digest=sha384
00:28:17.575 15:09:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:28:17.575 15:09:03 -- host/auth.sh@68 -- # keyid=4
00:28:17.575 15:09:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:17.575 15:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:17.575 15:09:03 -- common/autotest_common.sh@10 -- # set +x
00:28:17.575 15:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:17.575 15:09:03 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:17.575 15:09:03 -- nvmf/common.sh@717 -- # local ip
00:28:17.575 15:09:03 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:17.575 15:09:03 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:17.575 15:09:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:17.575 15:09:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:17.575 15:09:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:17.575 15:09:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:17.575 15:09:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:17.575 15:09:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:17.575 15:09:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:17.575 15:09:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:17.575 15:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:17.575 15:09:03 -- common/autotest_common.sh@10 -- # set +x
00:28:18.507 nvme0n1
00:28:18.507 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:18.507 15:09:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.507 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:18.507 15:09:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:18.507 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:18.507 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:18.507 15:09:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
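The DHHC-1:NN:...: strings cycling through the trace follow the NVMe DH-HMAC-CHAP secret representation: the literal DHHC-1 prefix, a two-digit transformation identifier (00 for an untransformed secret, 01/02/03 for SHA-256/384/512 per the spec), and a base64 blob that carries the secret bytes plus a CRC-32 tail. A hedged inspection sketch; the key value is one of the traced keys, and the 4-byte CRC offset is an assumption based on that layout (GNU head/xxd assumed available):

    # Strip the DHHC-1:NN: framing, base64-decode, drop the CRC-32 tail.
    key='DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB:'
    blob=${key#DHHC-1:*:}   # remove prefix and transformation id
    blob=${blob%:}          # remove trailing colon
    echo "$blob" | base64 -d | head -c -4 | xxd   # secret bytes, CRC removed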
00:28:18.507 15:09:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:18.507 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:18.507 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:18.507 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:18.507 15:09:04 -- host/auth.sh@107 -- # for digest in "${digests[@]}"
00:28:18.507 15:09:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:28:18.507 15:09:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:18.507 15:09:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:28:18.507 15:09:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:18.507 15:09:04 -- host/auth.sh@44 -- # digest=sha512
00:28:18.507 15:09:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:18.507 15:09:04 -- host/auth.sh@44 -- # keyid=0
00:28:18.507 15:09:04 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB:
00:28:18.507 15:09:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:18.507 15:09:04 -- host/auth.sh@48 -- # echo ffdhe2048
00:28:18.507 15:09:04 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB:
00:28:18.507 15:09:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0
00:28:18.507 15:09:04 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:18.507 15:09:04 -- host/auth.sh@68 -- # digest=sha512
00:28:18.507 15:09:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:28:18.507 15:09:04 -- host/auth.sh@68 -- # keyid=0
00:28:18.507 15:09:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:18.507 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:18.507 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:18.507 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:18.507 15:09:04 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:18.507 15:09:04 -- nvmf/common.sh@717 -- # local ip
00:28:18.507 15:09:04 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:18.507 15:09:04 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:18.507 15:09:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.507 15:09:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.507 15:09:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:18.507 15:09:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.507 15:09:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:18.507 15:09:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:18.507 15:09:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:18.507 15:09:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:28:18.507 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:18.507 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:18.764 nvme0n1
00:28:18.764 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:18.765 15:09:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.765 15:09:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:18.765 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:18.765 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:18.765 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:18.765 15:09:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:18.765 15:09:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:18.765 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:18.765 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:18.765 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:18.765 15:09:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:18.765 15:09:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:28:18.765 15:09:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:18.765 15:09:04 -- host/auth.sh@44 -- # digest=sha512
00:28:18.765 15:09:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:18.765 15:09:04 -- host/auth.sh@44 -- # keyid=1
00:28:18.765 15:09:04 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==:
00:28:18.765 15:09:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:18.765 15:09:04 -- host/auth.sh@48 -- # echo ffdhe2048
00:28:18.765 15:09:04 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==:
00:28:18.765 15:09:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1
00:28:18.765 15:09:04 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:18.765 15:09:04 -- host/auth.sh@68 -- # digest=sha512
00:28:18.765 15:09:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:28:18.765 15:09:04 -- host/auth.sh@68 -- # keyid=1
00:28:18.765 15:09:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:18.765 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:18.765 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:18.765 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:18.765 15:09:04 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:18.765 15:09:04 -- nvmf/common.sh@717 -- # local ip
00:28:18.765 15:09:04 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:18.765 15:09:04 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:18.765 15:09:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.765 15:09:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.765 15:09:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:18.765 15:09:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.765 15:09:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:18.765 15:09:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:18.765 15:09:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:18.765 15:09:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:28:18.765 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:18.765 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.023 nvme0n1
00:28:19.023 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.023 15:09:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.023 15:09:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:19.023 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.023 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.023 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.023 15:09:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.023 15:09:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.023 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.023 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.023 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.023 15:09:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:19.023 15:09:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:28:19.023 15:09:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:19.023 15:09:04 -- host/auth.sh@44 -- # digest=sha512
00:28:19.023 15:09:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:19.023 15:09:04 -- host/auth.sh@44 -- # keyid=2
00:28:19.023 15:09:04 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2:
00:28:19.023 15:09:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:19.023 15:09:04 -- host/auth.sh@48 -- # echo ffdhe2048
00:28:19.023 15:09:04 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2:
00:28:19.023 15:09:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2
00:28:19.023 15:09:04 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:19.023 15:09:04 -- host/auth.sh@68 -- # digest=sha512
00:28:19.023 15:09:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:28:19.023 15:09:04 -- host/auth.sh@68 -- # keyid=2
00:28:19.023 15:09:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:19.023 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.023 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.023 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.023 15:09:04 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:19.023 15:09:04 -- nvmf/common.sh@717 -- # local ip
00:28:19.023 15:09:04 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:19.023 15:09:04 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:19.023 15:09:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.023 15:09:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.023 15:09:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:19.023 15:09:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.023 15:09:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:19.023 15:09:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:19.023 15:09:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:19.023 15:09:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:19.023 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.023 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.023 nvme0n1
00:28:19.023 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.023 15:09:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.023 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.023 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.023 15:09:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:19.023 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.281 15:09:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.281 15:09:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.281 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.281 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.281 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.281 15:09:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:19.281 15:09:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:28:19.281 15:09:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:19.281 15:09:04 -- host/auth.sh@44 -- # digest=sha512
00:28:19.281 15:09:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:19.281 15:09:04 -- host/auth.sh@44 -- # keyid=3
00:28:19.281 15:09:04 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==:
00:28:19.281 15:09:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:19.281 15:09:04 -- host/auth.sh@48 -- # echo ffdhe2048
00:28:19.281 15:09:04 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==:
00:28:19.281 15:09:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3
00:28:19.281 15:09:04 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:19.281 15:09:04 -- host/auth.sh@68 -- # digest=sha512
00:28:19.281 15:09:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:28:19.281 15:09:04 -- host/auth.sh@68 -- # keyid=3
00:28:19.281 15:09:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:19.281 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.281 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.281 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.281 15:09:04 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:19.281 15:09:04 -- nvmf/common.sh@717 -- # local ip
00:28:19.281 15:09:04 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:19.281 15:09:04 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:19.281 15:09:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.281 15:09:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.281 15:09:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:19.281 15:09:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.281 15:09:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:19.281 15:09:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:19.281 15:09:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:19.281 15:09:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:28:19.281 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.281 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.281 nvme0n1
00:28:19.281 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.281 15:09:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.281 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.281 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.281 15:09:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:19.281 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.281 15:09:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.281 15:09:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.281 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.281 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.281 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.281 15:09:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
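On the target side, nvmet_auth_set_key's three echo records (the hmac(...) string, the DH group name, and the DHHC-1 secret) are writes into the kernel nvmet configfs host entry; bash xtrace does not print redirections, which is why only the bare echoes appear in the trace. A sketch of what those writes presumably look like for the sha512/ffdhe2048 key set next; the configfs mount point and attribute paths are assumptions, not shown in the trace:

    # Presumed shape of nvmet_auth_set_key; paths are assumptions.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"
    echo ffdhe2048 > "$host/dhchap_dhgroup"
    echo 'DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=:' > "$host/dhchap_key"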
00:28:19.281 15:09:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:28:19.281 15:09:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:19.281 15:09:04 -- host/auth.sh@44 -- # digest=sha512
00:28:19.281 15:09:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:19.281 15:09:04 -- host/auth.sh@44 -- # keyid=4
00:28:19.281 15:09:04 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=:
00:28:19.281 15:09:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:19.281 15:09:04 -- host/auth.sh@48 -- # echo ffdhe2048
00:28:19.281 15:09:04 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=:
00:28:19.281 15:09:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4
00:28:19.281 15:09:04 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:19.281 15:09:04 -- host/auth.sh@68 -- # digest=sha512
00:28:19.281 15:09:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:28:19.281 15:09:04 -- host/auth.sh@68 -- # keyid=4
00:28:19.281 15:09:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:19.281 15:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.281 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:28:19.281 15:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.281 15:09:04 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:19.281 15:09:05 -- nvmf/common.sh@717 -- # local ip
00:28:19.281 15:09:05 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:19.281 15:09:05 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:19.281 15:09:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.281 15:09:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.281 15:09:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:19.281 15:09:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.281 15:09:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:19.281 15:09:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:19.281 15:09:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:19.281 15:09:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:19.281 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.281 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:19.539 nvme0n1
00:28:19.539 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.539 15:09:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.539 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.539 15:09:05 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:19.539 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:19.539 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.539 15:09:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.539 15:09:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.539 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.539 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:19.539 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.539 15:09:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:28:19.539 15:09:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:19.539 15:09:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:28:19.539 15:09:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:19.539 15:09:05 -- host/auth.sh@44 -- # digest=sha512
00:28:19.539 15:09:05 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:19.539 15:09:05 -- host/auth.sh@44 -- # keyid=0
00:28:19.539 15:09:05 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB:
00:28:19.539 15:09:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:19.539 15:09:05 -- host/auth.sh@48 -- # echo ffdhe3072
00:28:19.539 15:09:05 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB:
00:28:19.539 15:09:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0
00:28:19.539 15:09:05 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:19.539 15:09:05 -- host/auth.sh@68 -- # digest=sha512
00:28:19.539 15:09:05 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:28:19.539 15:09:05 -- host/auth.sh@68 -- # keyid=0
00:28:19.539 15:09:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:19.539 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.539 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:19.539 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.539 15:09:05 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:19.539 15:09:05 -- nvmf/common.sh@717 -- # local ip
00:28:19.539 15:09:05 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:19.539 15:09:05 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:19.539 15:09:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.539 15:09:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.539 15:09:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:19.539 15:09:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.539 15:09:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:19.539 15:09:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:19.539 15:09:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:19.539 15:09:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:28:19.539 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.539 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:19.797 nvme0n1
00:28:19.797 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.797 15:09:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.797 15:09:05 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:19.797 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.797 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:19.797 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.797 15:09:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.797 15:09:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.797 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.797 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:19.797 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.797 15:09:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:19.797 15:09:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:28:19.797 15:09:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key
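Most rpc_cmd invocations above are bracketed by xtrace_disable / set +x and followed by a [[ 0 == 0 ]] record: the autotest helpers silence tracing around the noisy RPC plumbing and then assert on the preserved exit status. A reduced sketch of that wrapper pattern; the helper names come from the trace, the bodies are assumptions about what the real autotest_common.sh helpers do:

    # Quiet-wrapper idiom behind the repeating trace records.
    rpc_cmd() {
        local rc had_x=
        [[ $- == *x* ]] && had_x=1
        set +x                       # xtrace_disable in the harness
        scripts/rpc.py "$@"
        rc=$?
        [[ $had_x ]] && set -x       # xtrace_restore
        [[ $rc == 0 ]]               # traces as '[[ 0 == 0 ]]' on success
    }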
00:28:19.797 15:09:05 -- host/auth.sh@44 -- # digest=sha512
00:28:19.797 15:09:05 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:19.797 15:09:05 -- host/auth.sh@44 -- # keyid=1
00:28:19.797 15:09:05 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==:
00:28:19.797 15:09:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:19.797 15:09:05 -- host/auth.sh@48 -- # echo ffdhe3072
00:28:19.797 15:09:05 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==:
00:28:19.797 15:09:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1
00:28:19.797 15:09:05 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:19.797 15:09:05 -- host/auth.sh@68 -- # digest=sha512
00:28:19.797 15:09:05 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:28:19.797 15:09:05 -- host/auth.sh@68 -- # keyid=1
00:28:19.797 15:09:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:19.797 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.797 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:19.797 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.797 15:09:05 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:19.797 15:09:05 -- nvmf/common.sh@717 -- # local ip
00:28:19.797 15:09:05 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:19.797 15:09:05 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:19.797 15:09:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.797 15:09:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.797 15:09:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:19.797 15:09:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.797 15:09:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:19.797 15:09:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:19.797 15:09:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:19.797 15:09:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:28:19.797 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.797 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:20.056 nvme0n1
00:28:20.057 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.057 15:09:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.057 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.057 15:09:05 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:20.057 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:20.057 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.057 15:09:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.057 15:09:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.057 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.057 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:20.057 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.057 15:09:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:20.057 15:09:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:28:20.057 15:09:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:20.057 15:09:05 -- host/auth.sh@44 -- # digest=sha512
00:28:20.057 15:09:05 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:20.057 15:09:05 -- host/auth.sh@44 -- # keyid=2
00:28:20.057 15:09:05 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2:
00:28:20.057 15:09:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:20.057 15:09:05 -- host/auth.sh@48 -- # echo ffdhe3072
00:28:20.057 15:09:05 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2:
00:28:20.057 15:09:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2
00:28:20.057 15:09:05 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:20.057 15:09:05 -- host/auth.sh@68 -- # digest=sha512
00:28:20.057 15:09:05 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:28:20.057 15:09:05 -- host/auth.sh@68 -- # keyid=2
00:28:20.057 15:09:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:20.057 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.057 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:20.057 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.057 15:09:05 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:20.057 15:09:05 -- nvmf/common.sh@717 -- # local ip
00:28:20.057 15:09:05 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:20.057 15:09:05 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:20.057 15:09:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.057 15:09:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.057 15:09:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:20.057 15:09:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.057 15:09:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:20.057 15:09:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:20.057 15:09:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:20.057 15:09:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:20.057 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.057 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:20.314 nvme0n1
00:28:20.314 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.314 15:09:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.314 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.314 15:09:05 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:20.314 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:20.314 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.314 15:09:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.314 15:09:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.314 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.314 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:20.314 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.314 15:09:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:20.314 15:09:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:28:20.314 15:09:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:20.314 15:09:05 -- host/auth.sh@44 -- # digest=sha512
00:28:20.314 15:09:05 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:20.314 15:09:05 -- host/auth.sh@44 -- # keyid=3
00:28:20.314 15:09:05 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==:
00:28:20.314 15:09:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:20.314 15:09:05 -- host/auth.sh@48 -- # echo ffdhe3072
00:28:20.314 15:09:05 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==:
00:28:20.314 15:09:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3
00:28:20.314 15:09:05 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:20.314 15:09:05 -- host/auth.sh@68 -- # digest=sha512
00:28:20.315 15:09:05 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:28:20.315 15:09:05 -- host/auth.sh@68 -- # keyid=3
00:28:20.315 15:09:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:20.315 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.315 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:20.315 15:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.315 15:09:05 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:20.315 15:09:05 -- nvmf/common.sh@717 -- # local ip
00:28:20.315 15:09:05 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:20.315 15:09:05 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:20.315 15:09:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.315 15:09:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.315 15:09:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:20.315 15:09:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.315 15:09:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:20.315 15:09:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:20.315 15:09:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:20.315 15:09:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:28:20.315 15:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.315 15:09:05 -- common/autotest_common.sh@10 -- # set +x
00:28:20.573 nvme0n1
00:28:20.573 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.573 15:09:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.573 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.573 15:09:06 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:20.573 15:09:06 -- common/autotest_common.sh@10 -- # set +x
00:28:20.573 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.573 15:09:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.573 15:09:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.573 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.573 15:09:06 -- common/autotest_common.sh@10 -- # set +x
00:28:20.573 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.573 15:09:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:20.573 15:09:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:28:20.573 15:09:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:20.573 15:09:06 -- host/auth.sh@44 -- # digest=sha512
00:28:20.573 15:09:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:20.573 15:09:06 -- host/auth.sh@44 -- # keyid=4
00:28:20.573 15:09:06 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=:
00:28:20.573 15:09:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:20.573 15:09:06 -- host/auth.sh@48 -- # echo ffdhe3072
00:28:20.573 15:09:06 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=:
00:28:20.573 15:09:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4
00:28:20.573 15:09:06 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:28:20.573 15:09:06 -- host/auth.sh@68 -- # digest=sha512
00:28:20.573 15:09:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:28:20.573 15:09:06 -- host/auth.sh@68 -- # keyid=4
00:28:20.573 15:09:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:20.573 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.573 15:09:06 -- common/autotest_common.sh@10 -- # set +x
00:28:20.573 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.573 15:09:06 -- host/auth.sh@70 -- # get_main_ns_ip
00:28:20.573 15:09:06 -- nvmf/common.sh@717 -- # local ip
00:28:20.573 15:09:06 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:20.573 15:09:06 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:20.573 15:09:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.573 15:09:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.573 15:09:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:20.573 15:09:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.573 15:09:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:20.573 15:09:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:20.573 15:09:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:20.573 15:09:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:20.573 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.573 15:09:06 -- common/autotest_common.sh@10 -- # set +x
00:28:20.573 nvme0n1
00:28:20.573 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.573 15:09:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.573 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.573 15:09:06 -- host/auth.sh@73 -- # jq -r '.[].name'
00:28:20.573 15:09:06 -- common/autotest_common.sh@10 -- # set +x
00:28:20.573 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.573 15:09:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.573 15:09:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.573 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:20.573 15:09:06 -- common/autotest_common.sh@10 -- # set +x
00:28:20.573 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:20.573 15:09:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:28:20.573 15:09:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:28:20.573 15:09:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:28:20.573 15:09:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:28:20.573 15:09:06 -- host/auth.sh@44 -- # digest=sha512
00:28:20.573 15:09:06 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:20.573 15:09:06 -- host/auth.sh@44 -- # keyid=0
00:28:20.573 15:09:06 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB:
00:28:20.573 15:09:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:28:20.573 15:09:06 -- host/auth.sh@48 -- # echo ffdhe4096
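The host/auth.sh@107-110 records show the shape of the loop driving this whole section: an outer loop over digests, a middle loop over DH groups, an inner loop over the key table. A sketch of that matrix; only sha384/sha512 and ffdhe2048 through ffdhe8192 actually appear in this part of the trace, so the exact array contents below are assumptions:

    # Test matrix inferred from the trace; array contents are assumptions.
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done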
host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:20.573 15:09:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:28:20.573 15:09:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:20.573 15:09:06 -- host/auth.sh@68 -- # digest=sha512 00:28:20.573 15:09:06 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:20.573 15:09:06 -- host/auth.sh@68 -- # keyid=0 00:28:20.573 15:09:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:20.573 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.573 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:28:20.573 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.573 15:09:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:20.573 15:09:06 -- nvmf/common.sh@717 -- # local ip 00:28:20.573 15:09:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:20.573 15:09:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:20.573 15:09:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.573 15:09:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.573 15:09:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:20.573 15:09:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.573 15:09:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:20.573 15:09:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:20.573 15:09:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:20.573 15:09:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:20.573 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.573 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:28:21.166 nvme0n1 00:28:21.166 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.166 15:09:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.166 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.166 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:28:21.166 15:09:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.166 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.166 15:09:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.166 15:09:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.166 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.166 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:28:21.166 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.166 15:09:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:21.166 15:09:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:21.166 15:09:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.166 15:09:06 -- host/auth.sh@44 -- # digest=sha512 00:28:21.166 15:09:06 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.166 15:09:06 -- host/auth.sh@44 -- # keyid=1 00:28:21.166 15:09:06 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:21.166 15:09:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:21.166 15:09:06 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:21.166 15:09:06 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:21.166 15:09:06 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:28:21.166 15:09:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:21.166 15:09:06 -- host/auth.sh@68 -- # digest=sha512 00:28:21.166 15:09:06 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:21.166 15:09:06 -- host/auth.sh@68 -- # keyid=1 00:28:21.166 15:09:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:21.166 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.166 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:28:21.166 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.166 15:09:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:21.166 15:09:06 -- nvmf/common.sh@717 -- # local ip 00:28:21.166 15:09:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.166 15:09:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.166 15:09:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.166 15:09:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.166 15:09:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.166 15:09:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.166 15:09:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.166 15:09:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.166 15:09:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.166 15:09:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:21.166 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.166 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:28:21.425 nvme0n1 00:28:21.425 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.425 15:09:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.425 15:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.425 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:28:21.425 15:09:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.425 15:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.425 15:09:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.425 15:09:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.425 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.425 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.425 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.425 15:09:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:21.425 15:09:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:21.425 15:09:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.425 15:09:07 -- host/auth.sh@44 -- # digest=sha512 00:28:21.425 15:09:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.425 15:09:07 -- host/auth.sh@44 -- # keyid=2 00:28:21.425 15:09:07 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:21.425 15:09:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:21.425 15:09:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:21.425 15:09:07 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:21.425 15:09:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:28:21.425 15:09:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:21.425 15:09:07 -- 
host/auth.sh@68 -- # digest=sha512 00:28:21.425 15:09:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:21.425 15:09:07 -- host/auth.sh@68 -- # keyid=2 00:28:21.425 15:09:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:21.425 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.425 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.425 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.425 15:09:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:21.425 15:09:07 -- nvmf/common.sh@717 -- # local ip 00:28:21.425 15:09:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.425 15:09:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.425 15:09:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.425 15:09:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.425 15:09:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.425 15:09:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.425 15:09:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.425 15:09:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.425 15:09:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.425 15:09:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:21.425 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.425 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.682 nvme0n1 00:28:21.682 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.682 15:09:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.682 15:09:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.682 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.682 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.682 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.682 15:09:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.682 15:09:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.682 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.682 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.682 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.682 15:09:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:21.682 15:09:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:21.682 15:09:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.682 15:09:07 -- host/auth.sh@44 -- # digest=sha512 00:28:21.682 15:09:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.682 15:09:07 -- host/auth.sh@44 -- # keyid=3 00:28:21.682 15:09:07 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:21.682 15:09:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:21.682 15:09:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:21.682 15:09:07 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:21.682 15:09:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:28:21.682 15:09:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:21.682 15:09:07 -- host/auth.sh@68 -- # digest=sha512 00:28:21.682 15:09:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:21.682 15:09:07 
-- host/auth.sh@68 -- # keyid=3 00:28:21.682 15:09:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:21.682 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.682 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.682 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.682 15:09:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:21.682 15:09:07 -- nvmf/common.sh@717 -- # local ip 00:28:21.682 15:09:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.682 15:09:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.682 15:09:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.682 15:09:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.682 15:09:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.682 15:09:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.682 15:09:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.682 15:09:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.682 15:09:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.682 15:09:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:21.682 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.682 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.939 nvme0n1 00:28:21.939 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.939 15:09:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.939 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.939 15:09:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.939 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.939 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.939 15:09:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.939 15:09:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.939 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.939 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.939 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.939 15:09:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:21.939 15:09:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:21.939 15:09:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.939 15:09:07 -- host/auth.sh@44 -- # digest=sha512 00:28:21.939 15:09:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.939 15:09:07 -- host/auth.sh@44 -- # keyid=4 00:28:21.939 15:09:07 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:21.939 15:09:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:21.940 15:09:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:21.940 15:09:07 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:21.940 15:09:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:28:21.940 15:09:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:21.940 15:09:07 -- host/auth.sh@68 -- # digest=sha512 00:28:21.940 15:09:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:21.940 15:09:07 -- host/auth.sh@68 -- # keyid=4 00:28:21.940 15:09:07 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:21.940 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.940 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:21.940 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.940 15:09:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:21.940 15:09:07 -- nvmf/common.sh@717 -- # local ip 00:28:21.940 15:09:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.940 15:09:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.940 15:09:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.940 15:09:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.940 15:09:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.940 15:09:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.940 15:09:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:22.197 15:09:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:22.197 15:09:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:22.197 15:09:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.197 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.197 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:22.197 nvme0n1 00:28:22.197 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.197 15:09:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.197 15:09:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:22.197 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.197 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:22.197 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.466 15:09:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.466 15:09:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.466 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.466 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:22.466 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.466 15:09:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.466 15:09:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:22.466 15:09:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:22.466 15:09:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:22.466 15:09:07 -- host/auth.sh@44 -- # digest=sha512 00:28:22.466 15:09:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.467 15:09:07 -- host/auth.sh@44 -- # keyid=0 00:28:22.467 15:09:07 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:22.467 15:09:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:22.467 15:09:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:22.467 15:09:07 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:22.467 15:09:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:28:22.467 15:09:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:22.467 15:09:07 -- host/auth.sh@68 -- # digest=sha512 00:28:22.467 15:09:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:22.467 15:09:07 -- host/auth.sh@68 -- # keyid=0 00:28:22.467 15:09:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
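The trace above repeats one fixed pattern per (digest, dhgroup, keyid) combination: program a DH-HMAC-CHAP key for the allowed host into the kernel nvmet target, restrict the SPDK host to a single digest and DH group, attach, verify, detach. A minimal bash sketch of one sha512 iteration, reconstructed from the rpc_cmd and echo calls visible in this trace (the configfs paths, the keys array, and the helper structure are assumptions about what host/auth.sh is doing, not verbatim script source):

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
        # Target side: the three echo calls traced at host/auth.sh@47-49 write the
        # hash, DH group and key for the allowed host (assumed nvmet configfs paths).
        host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo 'hmac(sha512)'    > "$host_cfs/dhchap_hash"
        echo "$dhgroup"        > "$host_cfs/dhchap_dhgroup"
        echo "${keys[$keyid]}" > "$host_cfs/dhchap_key"
        # Host side (connect_authenticate): allow exactly one digest/dhgroup, then
        # connect with the matching key and verify a controller actually came up.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

The interleaved nvme0n1 lines in the trace are the namespace appearing each time an attach succeeds; the loop below continues this cycle through ffdhe6144 and ffdhe8192.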
00:28:22.467 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.467 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:22.467 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.467 15:09:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:22.467 15:09:07 -- nvmf/common.sh@717 -- # local ip 00:28:22.467 15:09:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:22.467 15:09:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:22.467 15:09:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.467 15:09:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.467 15:09:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:22.467 15:09:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.467 15:09:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:22.467 15:09:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:22.467 15:09:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:22.467 15:09:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:22.467 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.467 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:28:23.029 nvme0n1 00:28:23.029 15:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.029 15:09:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.029 15:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.029 15:09:08 -- common/autotest_common.sh@10 -- # set +x 00:28:23.029 15:09:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:23.029 15:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.030 15:09:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.030 15:09:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.030 15:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.030 15:09:08 -- common/autotest_common.sh@10 -- # set +x 00:28:23.030 15:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.030 15:09:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:23.030 15:09:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:23.030 15:09:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:23.030 15:09:08 -- host/auth.sh@44 -- # digest=sha512 00:28:23.030 15:09:08 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.030 15:09:08 -- host/auth.sh@44 -- # keyid=1 00:28:23.030 15:09:08 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:23.030 15:09:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:23.030 15:09:08 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:23.030 15:09:08 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:23.030 15:09:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:28:23.030 15:09:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:23.030 15:09:08 -- host/auth.sh@68 -- # digest=sha512 00:28:23.030 15:09:08 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:23.030 15:09:08 -- host/auth.sh@68 -- # keyid=1 00:28:23.030 15:09:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:23.030 15:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.030 15:09:08 -- 
common/autotest_common.sh@10 -- # set +x 00:28:23.030 15:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.030 15:09:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:23.030 15:09:08 -- nvmf/common.sh@717 -- # local ip 00:28:23.030 15:09:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:23.030 15:09:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:23.030 15:09:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.030 15:09:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.030 15:09:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:23.030 15:09:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.030 15:09:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:23.030 15:09:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:23.030 15:09:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:23.030 15:09:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:23.030 15:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.030 15:09:08 -- common/autotest_common.sh@10 -- # set +x 00:28:23.594 nvme0n1 00:28:23.594 15:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.594 15:09:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.594 15:09:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:23.594 15:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.594 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:23.594 15:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.594 15:09:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.594 15:09:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.594 15:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.594 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:23.594 15:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.594 15:09:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:23.594 15:09:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:23.594 15:09:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:23.594 15:09:09 -- host/auth.sh@44 -- # digest=sha512 00:28:23.594 15:09:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.594 15:09:09 -- host/auth.sh@44 -- # keyid=2 00:28:23.594 15:09:09 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:23.594 15:09:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:23.594 15:09:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:23.594 15:09:09 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:23.594 15:09:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:28:23.594 15:09:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:23.594 15:09:09 -- host/auth.sh@68 -- # digest=sha512 00:28:23.594 15:09:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:23.594 15:09:09 -- host/auth.sh@68 -- # keyid=2 00:28:23.594 15:09:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:23.594 15:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.594 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:23.594 15:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.594 15:09:09 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:28:23.594 15:09:09 -- nvmf/common.sh@717 -- # local ip 00:28:23.594 15:09:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:23.594 15:09:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:23.594 15:09:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.594 15:09:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.594 15:09:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:23.594 15:09:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.594 15:09:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:23.594 15:09:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:23.594 15:09:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:23.594 15:09:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:23.594 15:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.594 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:24.160 nvme0n1 00:28:24.160 15:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.160 15:09:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.160 15:09:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:24.160 15:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.160 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:24.160 15:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.160 15:09:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.160 15:09:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.160 15:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.160 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:24.160 15:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.160 15:09:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:24.160 15:09:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:24.160 15:09:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:24.160 15:09:09 -- host/auth.sh@44 -- # digest=sha512 00:28:24.160 15:09:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.160 15:09:09 -- host/auth.sh@44 -- # keyid=3 00:28:24.160 15:09:09 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:24.160 15:09:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:24.160 15:09:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:24.160 15:09:09 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:24.160 15:09:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:28:24.160 15:09:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:24.160 15:09:09 -- host/auth.sh@68 -- # digest=sha512 00:28:24.160 15:09:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:24.160 15:09:09 -- host/auth.sh@68 -- # keyid=3 00:28:24.160 15:09:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:24.160 15:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.160 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:24.161 15:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.161 15:09:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:24.161 15:09:09 -- nvmf/common.sh@717 -- # local ip 00:28:24.161 15:09:09 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:28:24.161 15:09:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:24.161 15:09:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.161 15:09:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.161 15:09:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:24.161 15:09:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.161 15:09:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:24.161 15:09:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:24.161 15:09:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:24.161 15:09:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:24.161 15:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.161 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:24.728 nvme0n1 00:28:24.728 15:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.728 15:09:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.728 15:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.728 15:09:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:24.728 15:09:10 -- common/autotest_common.sh@10 -- # set +x 00:28:24.728 15:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.728 15:09:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.728 15:09:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.728 15:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.728 15:09:10 -- common/autotest_common.sh@10 -- # set +x 00:28:24.728 15:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.728 15:09:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:24.728 15:09:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:24.728 15:09:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:24.728 15:09:10 -- host/auth.sh@44 -- # digest=sha512 00:28:24.728 15:09:10 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.728 15:09:10 -- host/auth.sh@44 -- # keyid=4 00:28:24.728 15:09:10 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:24.728 15:09:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:24.728 15:09:10 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:24.728 15:09:10 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:24.728 15:09:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:28:24.728 15:09:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:24.728 15:09:10 -- host/auth.sh@68 -- # digest=sha512 00:28:24.728 15:09:10 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:24.728 15:09:10 -- host/auth.sh@68 -- # keyid=4 00:28:24.728 15:09:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:24.728 15:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.728 15:09:10 -- common/autotest_common.sh@10 -- # set +x 00:28:24.728 15:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.728 15:09:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:24.728 15:09:10 -- nvmf/common.sh@717 -- # local ip 00:28:24.728 15:09:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:24.728 15:09:10 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:28:24.728 15:09:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.728 15:09:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.728 15:09:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:24.728 15:09:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.728 15:09:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:24.728 15:09:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:24.728 15:09:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:24.728 15:09:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.728 15:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.728 15:09:10 -- common/autotest_common.sh@10 -- # set +x 00:28:25.294 nvme0n1 00:28:25.294 15:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.294 15:09:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.294 15:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.294 15:09:10 -- common/autotest_common.sh@10 -- # set +x 00:28:25.294 15:09:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:25.294 15:09:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.294 15:09:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.294 15:09:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.294 15:09:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.294 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:28:25.552 15:09:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.552 15:09:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.552 15:09:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:25.552 15:09:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:25.552 15:09:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:25.552 15:09:11 -- host/auth.sh@44 -- # digest=sha512 00:28:25.552 15:09:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.552 15:09:11 -- host/auth.sh@44 -- # keyid=0 00:28:25.552 15:09:11 -- host/auth.sh@45 -- # key=DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:25.552 15:09:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:25.552 15:09:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:25.552 15:09:11 -- host/auth.sh@49 -- # echo DHHC-1:00:NDE0ZTM4ZWZjNmMxYjRmMjY5NTMyYjQ5YjFkYWQyZTQyYkdB: 00:28:25.552 15:09:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:28:25.552 15:09:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:25.552 15:09:11 -- host/auth.sh@68 -- # digest=sha512 00:28:25.552 15:09:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:25.552 15:09:11 -- host/auth.sh@68 -- # keyid=0 00:28:25.552 15:09:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:25.552 15:09:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.552 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:28:25.552 15:09:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.552 15:09:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:25.552 15:09:11 -- nvmf/common.sh@717 -- # local ip 00:28:25.552 15:09:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:25.552 15:09:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:25.552 15:09:11 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.552 15:09:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.552 15:09:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:25.552 15:09:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.552 15:09:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:25.552 15:09:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:25.552 15:09:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:25.552 15:09:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:25.552 15:09:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.552 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:28:26.485 nvme0n1 00:28:26.485 15:09:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.485 15:09:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.485 15:09:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.485 15:09:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:26.485 15:09:12 -- common/autotest_common.sh@10 -- # set +x 00:28:26.485 15:09:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.485 15:09:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.485 15:09:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.485 15:09:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.485 15:09:12 -- common/autotest_common.sh@10 -- # set +x 00:28:26.485 15:09:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.485 15:09:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:26.485 15:09:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:26.485 15:09:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:26.485 15:09:12 -- host/auth.sh@44 -- # digest=sha512 00:28:26.485 15:09:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.485 15:09:12 -- host/auth.sh@44 -- # keyid=1 00:28:26.485 15:09:12 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:26.485 15:09:12 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:26.485 15:09:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:26.485 15:09:12 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:26.485 15:09:12 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:28:26.485 15:09:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:26.485 15:09:12 -- host/auth.sh@68 -- # digest=sha512 00:28:26.485 15:09:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:26.485 15:09:12 -- host/auth.sh@68 -- # keyid=1 00:28:26.485 15:09:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:26.485 15:09:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.485 15:09:12 -- common/autotest_common.sh@10 -- # set +x 00:28:26.485 15:09:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.485 15:09:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:26.485 15:09:12 -- nvmf/common.sh@717 -- # local ip 00:28:26.485 15:09:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:26.485 15:09:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:26.485 15:09:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.485 15:09:12 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.485 15:09:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:26.485 15:09:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.485 15:09:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:26.485 15:09:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:26.485 15:09:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:26.485 15:09:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:26.485 15:09:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.485 15:09:12 -- common/autotest_common.sh@10 -- # set +x 00:28:27.418 nvme0n1 00:28:27.418 15:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.418 15:09:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.418 15:09:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:27.418 15:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.418 15:09:13 -- common/autotest_common.sh@10 -- # set +x 00:28:27.418 15:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.418 15:09:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.418 15:09:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.418 15:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.418 15:09:13 -- common/autotest_common.sh@10 -- # set +x 00:28:27.418 15:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.418 15:09:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:27.418 15:09:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:27.418 15:09:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:27.418 15:09:13 -- host/auth.sh@44 -- # digest=sha512 00:28:27.418 15:09:13 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:27.418 15:09:13 -- host/auth.sh@44 -- # keyid=2 00:28:27.418 15:09:13 -- host/auth.sh@45 -- # key=DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:27.418 15:09:13 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:27.418 15:09:13 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:27.418 15:09:13 -- host/auth.sh@49 -- # echo DHHC-1:01:MTEyNmE3OGQyMGZkYmYwMTU1OWRkYjhkOGQ1YmI3ZDHaZ0m2: 00:28:27.418 15:09:13 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:28:27.418 15:09:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:27.418 15:09:13 -- host/auth.sh@68 -- # digest=sha512 00:28:27.418 15:09:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:27.418 15:09:13 -- host/auth.sh@68 -- # keyid=2 00:28:27.418 15:09:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:27.418 15:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.418 15:09:13 -- common/autotest_common.sh@10 -- # set +x 00:28:27.418 15:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.418 15:09:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:27.418 15:09:13 -- nvmf/common.sh@717 -- # local ip 00:28:27.418 15:09:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:27.418 15:09:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:27.418 15:09:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.418 15:09:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.418 15:09:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:27.418 15:09:13 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:28:27.418 15:09:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:27.418 15:09:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:27.418 15:09:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:27.418 15:09:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:27.418 15:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.418 15:09:13 -- common/autotest_common.sh@10 -- # set +x 00:28:28.354 nvme0n1 00:28:28.354 15:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.355 15:09:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.355 15:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.355 15:09:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:28.355 15:09:14 -- common/autotest_common.sh@10 -- # set +x 00:28:28.355 15:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.613 15:09:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.613 15:09:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.613 15:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.613 15:09:14 -- common/autotest_common.sh@10 -- # set +x 00:28:28.613 15:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.613 15:09:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:28.613 15:09:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:28.613 15:09:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:28.613 15:09:14 -- host/auth.sh@44 -- # digest=sha512 00:28:28.613 15:09:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:28.613 15:09:14 -- host/auth.sh@44 -- # keyid=3 00:28:28.613 15:09:14 -- host/auth.sh@45 -- # key=DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:28.613 15:09:14 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:28.613 15:09:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:28.613 15:09:14 -- host/auth.sh@49 -- # echo DHHC-1:02:YzdmMTFkZTlhNzRlMzRjNGQyMTYxODhlYzU5YTE5ODgyMDMwZTdiMWI2ZTAxMTgy6TiUyg==: 00:28:28.613 15:09:14 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:28:28.613 15:09:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:28.613 15:09:14 -- host/auth.sh@68 -- # digest=sha512 00:28:28.613 15:09:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:28.613 15:09:14 -- host/auth.sh@68 -- # keyid=3 00:28:28.613 15:09:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:28.613 15:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.613 15:09:14 -- common/autotest_common.sh@10 -- # set +x 00:28:28.613 15:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.613 15:09:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:28.613 15:09:14 -- nvmf/common.sh@717 -- # local ip 00:28:28.613 15:09:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:28.613 15:09:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:28.613 15:09:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.613 15:09:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.613 15:09:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:28.613 15:09:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.613 15:09:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:28.613 15:09:14 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:28.613 15:09:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:28.613 15:09:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:28.613 15:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.613 15:09:14 -- common/autotest_common.sh@10 -- # set +x 00:28:29.547 nvme0n1 00:28:29.547 15:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.547 15:09:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.547 15:09:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:29.547 15:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.547 15:09:15 -- common/autotest_common.sh@10 -- # set +x 00:28:29.547 15:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.547 15:09:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.547 15:09:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.547 15:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.547 15:09:15 -- common/autotest_common.sh@10 -- # set +x 00:28:29.547 15:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.547 15:09:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:29.547 15:09:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:29.547 15:09:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:29.547 15:09:15 -- host/auth.sh@44 -- # digest=sha512 00:28:29.547 15:09:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:29.547 15:09:15 -- host/auth.sh@44 -- # keyid=4 00:28:29.547 15:09:15 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:29.547 15:09:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:29.547 15:09:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:29.547 15:09:15 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjcwNWEyYzBmZWQyMTQ0NDczMmMyYTU2MjIwNmQzZmQ0ODM4NThlYmE2NTEzMmZmYmEwNjUzZmY5M2EyMjRlMTX62tE=: 00:28:29.547 15:09:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:28:29.547 15:09:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:29.547 15:09:15 -- host/auth.sh@68 -- # digest=sha512 00:28:29.547 15:09:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:29.547 15:09:15 -- host/auth.sh@68 -- # keyid=4 00:28:29.547 15:09:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:29.547 15:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.547 15:09:15 -- common/autotest_common.sh@10 -- # set +x 00:28:29.547 15:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.547 15:09:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:29.547 15:09:15 -- nvmf/common.sh@717 -- # local ip 00:28:29.547 15:09:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:29.548 15:09:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:29.548 15:09:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.548 15:09:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.548 15:09:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:29.548 15:09:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.548 15:09:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:29.548 15:09:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:29.548 15:09:15 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:29.548 15:09:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.548 15:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.548 15:09:15 -- common/autotest_common.sh@10 -- # set +x 00:28:30.481 nvme0n1 00:28:30.481 15:09:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.481 15:09:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.481 15:09:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:30.481 15:09:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.481 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:28:30.481 15:09:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.481 15:09:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.481 15:09:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.481 15:09:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.481 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:28:30.481 15:09:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.481 15:09:16 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:30.481 15:09:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:30.481 15:09:16 -- host/auth.sh@44 -- # digest=sha256 00:28:30.481 15:09:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.481 15:09:16 -- host/auth.sh@44 -- # keyid=1 00:28:30.481 15:09:16 -- host/auth.sh@45 -- # key=DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:30.481 15:09:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:30.481 15:09:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:30.481 15:09:16 -- host/auth.sh@49 -- # echo DHHC-1:00:OWRhMzJhZWUwOWVmNTVjODdhMTIwNmMxYTg5MjY5YzQwYmI1MGUwZWYyNWE1MGM2WdhNNA==: 00:28:30.481 15:09:16 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:30.481 15:09:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.481 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:28:30.481 15:09:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.481 15:09:16 -- host/auth.sh@119 -- # get_main_ns_ip 00:28:30.481 15:09:16 -- nvmf/common.sh@717 -- # local ip 00:28:30.481 15:09:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:30.481 15:09:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:30.481 15:09:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.481 15:09:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.481 15:09:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:30.481 15:09:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.481 15:09:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:30.481 15:09:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:30.481 15:09:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:30.481 15:09:16 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:30.481 15:09:16 -- common/autotest_common.sh@638 -- # local es=0 00:28:30.481 15:09:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:30.481 
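With every digest/dhgroup/key combination verified, the script switches to the negative paths: the target is re-keyed for sha256/ffdhe2048 key1, and a host connect with no --dhchap-key (and, further down, with the wrong key2) must be rejected. These attempts are driven through the harness's NOT wrapper, which is what the common/autotest_common.sh lines traced below are executing; a simplified reconstruction of its logic (an assumption inferred from the es bookkeeping in the trace, not the verbatim helper, which also validates its argument via valid_exec_arg first):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # >128 means killed by a signal: a real failure
    (( es != 0 ))                    # invert: NOT succeeds only when "$@" failed
}

# Usage mirrored from the trace: this attach is expected to fail with -32602.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

So the "Invalid parameters" JSON-RPC responses printed below are the expected outcome; NOT converts them into a passing assertion, and jq length then confirms no controller was left behind.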
15:09:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:30.481 15:09:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:30.481 15:09:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:30.481 15:09:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:30.481 15:09:16 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:30.481 15:09:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.481 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:28:30.740 request: 00:28:30.740 { 00:28:30.740 "name": "nvme0", 00:28:30.740 "trtype": "tcp", 00:28:30.740 "traddr": "10.0.0.1", 00:28:30.740 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:30.740 "adrfam": "ipv4", 00:28:30.740 "trsvcid": "4420", 00:28:30.740 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:30.740 "method": "bdev_nvme_attach_controller", 00:28:30.740 "req_id": 1 00:28:30.740 } 00:28:30.740 Got JSON-RPC error response 00:28:30.740 response: 00:28:30.740 { 00:28:30.740 "code": -32602, 00:28:30.740 "message": "Invalid parameters" 00:28:30.740 } 00:28:30.740 15:09:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:30.740 15:09:16 -- common/autotest_common.sh@641 -- # es=1 00:28:30.740 15:09:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:30.740 15:09:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:30.740 15:09:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:30.740 15:09:16 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.740 15:09:16 -- host/auth.sh@121 -- # jq length 00:28:30.740 15:09:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.740 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:28:30.740 15:09:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.740 15:09:16 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:28:30.740 15:09:16 -- host/auth.sh@124 -- # get_main_ns_ip 00:28:30.740 15:09:16 -- nvmf/common.sh@717 -- # local ip 00:28:30.740 15:09:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:30.740 15:09:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:30.740 15:09:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.740 15:09:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.740 15:09:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:30.740 15:09:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.740 15:09:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:30.740 15:09:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:30.740 15:09:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:30.740 15:09:16 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:30.740 15:09:16 -- common/autotest_common.sh@638 -- # local es=0 00:28:30.740 15:09:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:30.740 15:09:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:30.740 15:09:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:30.740 15:09:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:30.740 15:09:16 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:30.741 15:09:16 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:30.741 15:09:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.741 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:28:30.741 request: 00:28:30.741 { 00:28:30.741 "name": "nvme0", 00:28:30.741 "trtype": "tcp", 00:28:30.741 "traddr": "10.0.0.1", 00:28:30.741 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:30.741 "adrfam": "ipv4", 00:28:30.741 "trsvcid": "4420", 00:28:30.741 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:30.741 "dhchap_key": "key2", 00:28:30.741 "method": "bdev_nvme_attach_controller", 00:28:30.741 "req_id": 1 00:28:30.741 } 00:28:30.741 Got JSON-RPC error response 00:28:30.741 response: 00:28:30.741 { 00:28:30.741 "code": -32602, 00:28:30.741 "message": "Invalid parameters" 00:28:30.741 } 00:28:30.741 15:09:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:30.741 15:09:16 -- common/autotest_common.sh@641 -- # es=1 00:28:30.741 15:09:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:30.741 15:09:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:30.741 15:09:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:30.741 15:09:16 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.741 15:09:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.741 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:28:30.741 15:09:16 -- host/auth.sh@127 -- # jq length 00:28:30.741 15:09:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.741 15:09:16 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:28:30.741 15:09:16 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:28:30.741 15:09:16 -- host/auth.sh@130 -- # cleanup 00:28:30.741 15:09:16 -- host/auth.sh@24 -- # nvmftestfini 00:28:30.741 15:09:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:30.741 15:09:16 -- nvmf/common.sh@117 -- # sync 00:28:30.741 15:09:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:30.741 15:09:16 -- nvmf/common.sh@120 -- # set +e 00:28:30.741 15:09:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:30.741 15:09:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:30.741 rmmod nvme_tcp 00:28:30.741 rmmod nvme_fabrics 00:28:30.741 15:09:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:30.741 15:09:16 -- nvmf/common.sh@124 -- # set -e 00:28:30.741 15:09:16 -- nvmf/common.sh@125 -- # return 0 00:28:30.741 15:09:16 -- nvmf/common.sh@478 -- # '[' -n 3887218 ']' 00:28:30.741 15:09:16 -- nvmf/common.sh@479 -- # killprocess 3887218 00:28:30.741 15:09:16 -- common/autotest_common.sh@936 -- # '[' -z 3887218 ']' 00:28:30.741 15:09:16 -- common/autotest_common.sh@940 -- # kill -0 3887218 00:28:30.741 15:09:16 -- common/autotest_common.sh@941 -- # uname 00:28:30.741 15:09:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:30.741 15:09:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3887218 00:28:30.741 15:09:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:30.741 15:09:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:30.741 15:09:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3887218' 00:28:30.741 killing process with pid 3887218 00:28:30.741 15:09:16 -- common/autotest_common.sh@955 -- # kill 3887218 00:28:30.741 15:09:16 -- 
common/autotest_common.sh@960 -- # wait 3887218 00:28:31.000 15:09:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:31.000 15:09:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:31.000 15:09:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:31.000 15:09:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.000 15:09:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.000 15:09:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.000 15:09:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.000 15:09:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.533 15:09:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:33.533 15:09:18 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:33.533 15:09:18 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:33.533 15:09:18 -- host/auth.sh@27 -- # clean_kernel_target 00:28:33.533 15:09:18 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:33.533 15:09:18 -- nvmf/common.sh@675 -- # echo 0 00:28:33.533 15:09:18 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:33.533 15:09:18 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:33.533 15:09:18 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:33.533 15:09:18 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:33.533 15:09:18 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:33.533 15:09:18 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:33.533 15:09:18 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:34.467 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:34.467 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:34.467 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:34.467 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:34.467 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:34.467 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:34.467 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:34.467 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:34.467 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:34.467 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:34.467 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:34.467 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:34.467 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:34.467 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:34.467 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:34.467 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:35.437 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:28:35.437 15:09:21 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.uuV /tmp/spdk.key-null.zuB /tmp/spdk.key-sha256.XZV /tmp/spdk.key-sha384.CIv /tmp/spdk.key-sha512.aL5 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:35.437 15:09:21 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:36.380 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:36.380 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:36.380 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:28:36.380 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:36.380 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:36.380 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:36.380 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:36.380 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:36.380 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:36.380 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:36.380 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:36.380 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:36.380 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:36.380 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:36.380 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:36.380 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:36.380 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:36.638 00:28:36.638 real 0m48.388s 00:28:36.638 user 0m45.987s 00:28:36.638 sys 0m5.528s 00:28:36.638 15:09:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:36.638 15:09:22 -- common/autotest_common.sh@10 -- # set +x 00:28:36.638 ************************************ 00:28:36.638 END TEST nvmf_auth 00:28:36.638 ************************************ 00:28:36.638 15:09:22 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:28:36.638 15:09:22 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:36.638 15:09:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:36.638 15:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:36.638 15:09:22 -- common/autotest_common.sh@10 -- # set +x 00:28:36.638 ************************************ 00:28:36.638 START TEST nvmf_digest 00:28:36.638 ************************************ 00:28:36.638 15:09:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:36.896 * Looking for test storage... 
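The nvmf_auth teardown above removes the kernel nvmet target by walking its configfs tree in reverse creation order. A minimal sketch of that sequence, assembled from the paths shown in the trace (run as root; the bare `echo 0` in the trace most likely disables the namespace before removal):

    # unlink the subsystem from the port, then remove namespace, port and subsystem
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet   # unload the kernel target once configfs is empty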
00:28:36.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:36.896 15:09:22 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.896 15:09:22 -- nvmf/common.sh@7 -- # uname -s 00:28:36.896 15:09:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.896 15:09:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.896 15:09:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.896 15:09:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.896 15:09:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.896 15:09:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.896 15:09:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.896 15:09:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.896 15:09:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.896 15:09:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.896 15:09:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:36.896 15:09:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:36.896 15:09:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.896 15:09:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.896 15:09:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.896 15:09:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.896 15:09:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.896 15:09:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.896 15:09:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.896 15:09:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.896 15:09:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.896 15:09:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.896 15:09:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.896 15:09:22 -- paths/export.sh@5 -- # export PATH 00:28:36.897 15:09:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.897 15:09:22 -- nvmf/common.sh@47 -- # : 0 00:28:36.897 15:09:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:36.897 15:09:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:36.897 15:09:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.897 15:09:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.897 15:09:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.897 15:09:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:36.897 15:09:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:36.897 15:09:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:36.897 15:09:22 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:36.897 15:09:22 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:36.897 15:09:22 -- host/digest.sh@16 -- # runtime=2 00:28:36.897 15:09:22 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:36.897 15:09:22 -- host/digest.sh@138 -- # nvmftestinit 00:28:36.897 15:09:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:36.897 15:09:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.897 15:09:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:36.897 15:09:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:36.897 15:09:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:36.897 15:09:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.897 15:09:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.897 15:09:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.897 15:09:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:36.897 15:09:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:36.897 15:09:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:36.897 15:09:22 -- common/autotest_common.sh@10 -- # set +x 00:28:38.799 15:09:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:38.799 15:09:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:38.799 15:09:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:38.799 15:09:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:38.799 15:09:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:38.799 15:09:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:38.799 15:09:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:38.799 15:09:24 -- 
nvmf/common.sh@295 -- # net_devs=() 00:28:38.799 15:09:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:38.799 15:09:24 -- nvmf/common.sh@296 -- # e810=() 00:28:38.799 15:09:24 -- nvmf/common.sh@296 -- # local -ga e810 00:28:38.799 15:09:24 -- nvmf/common.sh@297 -- # x722=() 00:28:38.799 15:09:24 -- nvmf/common.sh@297 -- # local -ga x722 00:28:38.799 15:09:24 -- nvmf/common.sh@298 -- # mlx=() 00:28:38.799 15:09:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:38.799 15:09:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.799 15:09:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:38.799 15:09:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:38.799 15:09:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:38.799 15:09:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:38.799 15:09:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:38.800 15:09:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:38.800 15:09:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.800 15:09:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:38.800 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:38.800 15:09:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.800 15:09:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:38.800 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:38.800 15:09:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:38.800 15:09:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.800 15:09:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.800 15:09:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:38.800 15:09:24 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.800 15:09:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:38.800 Found net devices under 0000:84:00.0: cvl_0_0 00:28:38.800 15:09:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.800 15:09:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.800 15:09:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.800 15:09:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:38.800 15:09:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.800 15:09:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:38.800 Found net devices under 0000:84:00.1: cvl_0_1 00:28:38.800 15:09:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.800 15:09:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:38.800 15:09:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:38.800 15:09:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:38.800 15:09:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:38.800 15:09:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.800 15:09:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.800 15:09:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.800 15:09:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:38.800 15:09:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.800 15:09:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.800 15:09:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:38.800 15:09:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.800 15:09:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.800 15:09:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:38.800 15:09:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:38.800 15:09:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.800 15:09:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.058 15:09:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.058 15:09:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.058 15:09:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.058 15:09:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.058 15:09:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.058 15:09:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.058 15:09:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:28:39.058 00:28:39.058 --- 10.0.0.2 ping statistics --- 00:28:39.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.058 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:28:39.058 15:09:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:28:39.058 00:28:39.058 --- 10.0.0.1 ping statistics --- 00:28:39.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.058 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:28:39.058 15:09:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.058 15:09:24 -- nvmf/common.sh@411 -- # return 0 00:28:39.058 15:09:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:39.058 15:09:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.058 15:09:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:39.058 15:09:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:39.058 15:09:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.058 15:09:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:39.058 15:09:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:39.058 15:09:24 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:39.058 15:09:24 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:39.058 15:09:24 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:39.058 15:09:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:39.058 15:09:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:39.058 15:09:24 -- common/autotest_common.sh@10 -- # set +x 00:28:39.058 ************************************ 00:28:39.058 START TEST nvmf_digest_clean 00:28:39.058 ************************************ 00:28:39.058 15:09:24 -- common/autotest_common.sh@1111 -- # run_digest 00:28:39.058 15:09:24 -- host/digest.sh@120 -- # local dsa_initiator 00:28:39.058 15:09:24 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:39.058 15:09:24 -- host/digest.sh@121 -- # dsa_initiator=false 00:28:39.058 15:09:24 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:39.058 15:09:24 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:39.058 15:09:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:39.058 15:09:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:39.058 15:09:24 -- common/autotest_common.sh@10 -- # set +x 00:28:39.058 15:09:24 -- nvmf/common.sh@470 -- # nvmfpid=3897170 00:28:39.058 15:09:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:39.058 15:09:24 -- nvmf/common.sh@471 -- # waitforlisten 3897170 00:28:39.058 15:09:24 -- common/autotest_common.sh@817 -- # '[' -z 3897170 ']' 00:28:39.058 15:09:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.058 15:09:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:39.058 15:09:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.058 15:09:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:39.058 15:09:24 -- common/autotest_common.sh@10 -- # set +x 00:28:39.317 [2024-04-26 15:09:24.805258] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
00:28:39.317 [2024-04-26 15:09:24.805338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.317 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.317 [2024-04-26 15:09:24.849573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:39.317 [2024-04-26 15:09:24.880187] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.317 [2024-04-26 15:09:24.970097] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.317 [2024-04-26 15:09:24.970150] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.317 [2024-04-26 15:09:24.970167] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.317 [2024-04-26 15:09:24.970181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.317 [2024-04-26 15:09:24.970194] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.317 [2024-04-26 15:09:24.970224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.317 15:09:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:39.317 15:09:24 -- common/autotest_common.sh@850 -- # return 0 00:28:39.317 15:09:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:39.317 15:09:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:39.317 15:09:24 -- common/autotest_common.sh@10 -- # set +x 00:28:39.317 15:09:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.317 15:09:25 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:39.317 15:09:25 -- host/digest.sh@126 -- # common_target_config 00:28:39.317 15:09:25 -- host/digest.sh@43 -- # rpc_cmd 00:28:39.317 15:09:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.317 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:28:39.575 null0 00:28:39.575 [2024-04-26 15:09:25.138662] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.575 [2024-04-26 15:09:25.162916] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.575 15:09:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.575 15:09:25 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:39.575 15:09:25 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:39.575 15:09:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:39.575 15:09:25 -- host/digest.sh@80 -- # rw=randread 00:28:39.575 15:09:25 -- host/digest.sh@80 -- # bs=4096 00:28:39.575 15:09:25 -- host/digest.sh@80 -- # qd=128 00:28:39.575 15:09:25 -- host/digest.sh@80 -- # scan_dsa=false 00:28:39.575 15:09:25 -- host/digest.sh@83 -- # bperfpid=3897195 00:28:39.575 15:09:25 -- host/digest.sh@84 -- # waitforlisten 3897195 /var/tmp/bperf.sock 00:28:39.575 15:09:25 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:39.575 15:09:25 -- common/autotest_common.sh@817 -- # '[' -z 3897195 ']' 00:28:39.575 15:09:25 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:28:39.575 15:09:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:39.575 15:09:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.576 15:09:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:39.576 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:28:39.576 [2024-04-26 15:09:25.213139] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:28:39.576 [2024-04-26 15:09:25.213214] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897195 ] 00:28:39.576 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.576 [2024-04-26 15:09:25.251226] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:39.576 [2024-04-26 15:09:25.281552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.834 [2024-04-26 15:09:25.373518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.834 15:09:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:39.834 15:09:25 -- common/autotest_common.sh@850 -- # return 0 00:28:39.834 15:09:25 -- host/digest.sh@86 -- # false 00:28:39.834 15:09:25 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:39.834 15:09:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:40.400 15:09:25 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.400 15:09:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.659 nvme0n1 00:28:40.659 15:09:26 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:40.659 15:09:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.659 Running I/O for 2 seconds... 
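Each digest run follows the same shape: bdevperf is launched paused with --wait-for-rpc on a private socket, initialized, pointed at the target with data digest enabled (--ddgst), and then driven for the two-second run whose results follow. A condensed sketch assembled from the commands in the trace (paths relative to the SPDK tree):

    # start bdevperf paused, then configure it over its private RPC socket
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests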
00:28:43.189 00:28:43.189 Latency(us) 00:28:43.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.189 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:43.189 nvme0n1 : 2.00 17922.18 70.01 0.00 0.00 7132.70 3325.35 16505.36 00:28:43.189 =================================================================================================================== 00:28:43.189 Total : 17922.18 70.01 0.00 0.00 7132.70 3325.35 16505.36 00:28:43.189 0 00:28:43.189 15:09:28 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:43.189 15:09:28 -- host/digest.sh@93 -- # get_accel_stats 00:28:43.189 15:09:28 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:43.189 15:09:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:43.189 15:09:28 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:43.189 | select(.opcode=="crc32c") 00:28:43.189 | "\(.module_name) \(.executed)"' 00:28:43.189 15:09:28 -- host/digest.sh@94 -- # false 00:28:43.189 15:09:28 -- host/digest.sh@94 -- # exp_module=software 00:28:43.189 15:09:28 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:43.189 15:09:28 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:43.189 15:09:28 -- host/digest.sh@98 -- # killprocess 3897195 00:28:43.189 15:09:28 -- common/autotest_common.sh@936 -- # '[' -z 3897195 ']' 00:28:43.189 15:09:28 -- common/autotest_common.sh@940 -- # kill -0 3897195 00:28:43.189 15:09:28 -- common/autotest_common.sh@941 -- # uname 00:28:43.189 15:09:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:43.189 15:09:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3897195 00:28:43.189 15:09:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:43.189 15:09:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:43.189 15:09:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3897195' 00:28:43.189 killing process with pid 3897195 00:28:43.189 15:09:28 -- common/autotest_common.sh@955 -- # kill 3897195 00:28:43.189 Received shutdown signal, test time was about 2.000000 seconds 00:28:43.189 00:28:43.190 Latency(us) 00:28:43.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.190 =================================================================================================================== 00:28:43.190 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.190 15:09:28 -- common/autotest_common.sh@960 -- # wait 3897195 00:28:43.190 15:09:28 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:43.190 15:09:28 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:43.190 15:09:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:43.190 15:09:28 -- host/digest.sh@80 -- # rw=randread 00:28:43.190 15:09:28 -- host/digest.sh@80 -- # bs=131072 00:28:43.190 15:09:28 -- host/digest.sh@80 -- # qd=16 00:28:43.190 15:09:28 -- host/digest.sh@80 -- # scan_dsa=false 00:28:43.190 15:09:28 -- host/digest.sh@83 -- # bperfpid=3897607 00:28:43.190 15:09:28 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:43.190 15:09:28 -- host/digest.sh@84 -- # waitforlisten 3897607 /var/tmp/bperf.sock 00:28:43.190 15:09:28 -- common/autotest_common.sh@817 -- # '[' -z 3897607 ']' 00:28:43.190 15:09:28 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.190 15:09:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:43.190 15:09:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.190 15:09:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:43.190 15:09:28 -- common/autotest_common.sh@10 -- # set +x 00:28:43.190 [2024-04-26 15:09:28.923871] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:28:43.190 [2024-04-26 15:09:28.923942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897607 ] 00:28:43.190 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:43.190 Zero copy mechanism will not be used. 00:28:43.449 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.449 [2024-04-26 15:09:28.956568] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:43.449 [2024-04-26 15:09:28.989252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.449 [2024-04-26 15:09:29.080140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.449 15:09:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:43.449 15:09:29 -- common/autotest_common.sh@850 -- # return 0 00:28:43.449 15:09:29 -- host/digest.sh@86 -- # false 00:28:43.449 15:09:29 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:43.449 15:09:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:44.015 15:09:29 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.015 15:09:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.274 nvme0n1 00:28:44.274 15:09:29 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:44.274 15:09:29 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.274 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.274 Zero copy mechanism will not be used. 00:28:44.274 Running I/O for 2 seconds... 
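After each run the wrapper confirms the digest work was really executed by the expected accel module. A sketch of that check, using the accel_get_stats query and jq filter visible in the trace (with DSA disabled, the expected module is "software"):

    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]]   # crc32c ran, in software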
00:28:46.805 00:28:46.805 Latency(us) 00:28:46.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.805 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:46.805 nvme0n1 : 2.00 3900.22 487.53 0.00 0.00 4097.60 861.68 11019.76 00:28:46.805 =================================================================================================================== 00:28:46.805 Total : 3900.22 487.53 0.00 0.00 4097.60 861.68 11019.76 00:28:46.805 0 00:28:46.805 15:09:32 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:46.805 15:09:32 -- host/digest.sh@93 -- # get_accel_stats 00:28:46.805 15:09:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:46.805 15:09:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:46.805 | select(.opcode=="crc32c") 00:28:46.805 | "\(.module_name) \(.executed)"' 00:28:46.805 15:09:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:46.805 15:09:32 -- host/digest.sh@94 -- # false 00:28:46.805 15:09:32 -- host/digest.sh@94 -- # exp_module=software 00:28:46.805 15:09:32 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:46.805 15:09:32 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:46.805 15:09:32 -- host/digest.sh@98 -- # killprocess 3897607 00:28:46.805 15:09:32 -- common/autotest_common.sh@936 -- # '[' -z 3897607 ']' 00:28:46.805 15:09:32 -- common/autotest_common.sh@940 -- # kill -0 3897607 00:28:46.805 15:09:32 -- common/autotest_common.sh@941 -- # uname 00:28:46.805 15:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:46.805 15:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3897607 00:28:46.805 15:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:46.805 15:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:46.805 15:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3897607' 00:28:46.805 killing process with pid 3897607 00:28:46.805 15:09:32 -- common/autotest_common.sh@955 -- # kill 3897607 00:28:46.805 Received shutdown signal, test time was about 2.000000 seconds 00:28:46.805 00:28:46.805 Latency(us) 00:28:46.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.805 =================================================================================================================== 00:28:46.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.805 15:09:32 -- common/autotest_common.sh@960 -- # wait 3897607 00:28:46.805 15:09:32 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:46.805 15:09:32 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:46.805 15:09:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:46.805 15:09:32 -- host/digest.sh@80 -- # rw=randwrite 00:28:46.805 15:09:32 -- host/digest.sh@80 -- # bs=4096 00:28:46.805 15:09:32 -- host/digest.sh@80 -- # qd=128 00:28:46.805 15:09:32 -- host/digest.sh@80 -- # scan_dsa=false 00:28:46.805 15:09:32 -- host/digest.sh@83 -- # bperfpid=3898131 00:28:46.805 15:09:32 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:46.805 15:09:32 -- host/digest.sh@84 -- # waitforlisten 3898131 /var/tmp/bperf.sock 00:28:46.805 15:09:32 -- common/autotest_common.sh@817 -- # '[' -z 3898131 ']' 00:28:46.805 15:09:32 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.805 15:09:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:46.805 15:09:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.805 15:09:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:46.805 15:09:32 -- common/autotest_common.sh@10 -- # set +x 00:28:46.805 [2024-04-26 15:09:32.519137] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:28:46.805 [2024-04-26 15:09:32.519210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898131 ] 00:28:47.063 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.063 [2024-04-26 15:09:32.549859] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:47.063 [2024-04-26 15:09:32.577299] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.063 [2024-04-26 15:09:32.661126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.063 15:09:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:47.063 15:09:32 -- common/autotest_common.sh@850 -- # return 0 00:28:47.063 15:09:32 -- host/digest.sh@86 -- # false 00:28:47.063 15:09:32 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:47.063 15:09:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:47.630 15:09:33 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.630 15:09:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.630 nvme0n1 00:28:47.887 15:09:33 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:47.887 15:09:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.887 Running I/O for 2 seconds... 
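As an aside, the MiB/s column in these latency tables is derived directly from IOPS and the block size: MiB/s = IOPS x block_size / 2^20. Checking two of the results above as a quick sanity test:

    echo '3900.22 * 131072 / 1048576' | bc -l   # = 487.53..., matching the 128 KiB randread table
    echo '17922.18 * 4096 / 1048576' | bc -l    # = 70.00..., matching the 4 KiB randread table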
00:28:49.789 00:28:49.789 Latency(us) 00:28:49.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.789 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:49.789 nvme0n1 : 2.01 19345.39 75.57 0.00 0.00 6601.24 5873.97 13883.92 00:28:49.789 =================================================================================================================== 00:28:49.789 Total : 19345.39 75.57 0.00 0.00 6601.24 5873.97 13883.92 00:28:49.789 0 00:28:49.789 15:09:35 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:49.789 15:09:35 -- host/digest.sh@93 -- # get_accel_stats 00:28:49.789 15:09:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:49.789 15:09:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:49.789 15:09:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:49.789 | select(.opcode=="crc32c") 00:28:49.789 | "\(.module_name) \(.executed)"' 00:28:50.355 15:09:35 -- host/digest.sh@94 -- # false 00:28:50.355 15:09:35 -- host/digest.sh@94 -- # exp_module=software 00:28:50.355 15:09:35 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:50.355 15:09:35 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:50.355 15:09:35 -- host/digest.sh@98 -- # killprocess 3898131 00:28:50.355 15:09:35 -- common/autotest_common.sh@936 -- # '[' -z 3898131 ']' 00:28:50.355 15:09:35 -- common/autotest_common.sh@940 -- # kill -0 3898131 00:28:50.355 15:09:35 -- common/autotest_common.sh@941 -- # uname 00:28:50.355 15:09:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:50.355 15:09:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3898131 00:28:50.355 15:09:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:50.355 15:09:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:50.355 15:09:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3898131' 00:28:50.355 killing process with pid 3898131 00:28:50.355 15:09:35 -- common/autotest_common.sh@955 -- # kill 3898131 00:28:50.355 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.355 00:28:50.355 Latency(us) 00:28:50.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.355 =================================================================================================================== 00:28:50.355 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.355 15:09:35 -- common/autotest_common.sh@960 -- # wait 3898131 00:28:50.355 15:09:36 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:50.355 15:09:36 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:50.355 15:09:36 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:50.355 15:09:36 -- host/digest.sh@80 -- # rw=randwrite 00:28:50.355 15:09:36 -- host/digest.sh@80 -- # bs=131072 00:28:50.355 15:09:36 -- host/digest.sh@80 -- # qd=16 00:28:50.355 15:09:36 -- host/digest.sh@80 -- # scan_dsa=false 00:28:50.355 15:09:36 -- host/digest.sh@83 -- # bperfpid=3898535 00:28:50.355 15:09:36 -- host/digest.sh@84 -- # waitforlisten 3898535 /var/tmp/bperf.sock 00:28:50.355 15:09:36 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:50.355 15:09:36 -- common/autotest_common.sh@817 -- # '[' -z 3898535 ']' 00:28:50.355 
15:09:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:50.355 15:09:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:50.355 15:09:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:50.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:50.355 15:09:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:50.355 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:28:50.614 [2024-04-26 15:09:36.105228] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:28:50.614 [2024-04-26 15:09:36.105297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898535 ] 00:28:50.614 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:50.614 Zero copy mechanism will not be used. 00:28:50.614 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.614 [2024-04-26 15:09:36.136112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:50.614 [2024-04-26 15:09:36.164055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.614 [2024-04-26 15:09:36.249654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.614 15:09:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:50.614 15:09:36 -- common/autotest_common.sh@850 -- # return 0 00:28:50.614 15:09:36 -- host/digest.sh@86 -- # false 00:28:50.614 15:09:36 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:50.614 15:09:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:51.180 15:09:36 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.180 15:09:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.439 nvme0n1 00:28:51.439 15:09:36 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:51.439 15:09:36 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:51.439 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.439 Zero copy mechanism will not be used. 00:28:51.439 Running I/O for 2 seconds... 
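For orientation, nvmf_digest_clean sweeps four workloads in total, pairing small blocks at high queue depth with large blocks at low queue depth for both reads and writes. Condensed from the run_bperf invocations in the trace (the trailing "false" disables DSA offload):

    for spec in 'randread 4096 128' 'randread 131072 16' \
                'randwrite 4096 128' 'randwrite 131072 16'; do
        # $spec left unquoted on purpose: it splits into the rw/bs/qd arguments
        run_bperf $spec false
    done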
00:28:53.367 00:28:53.367 Latency(us) 00:28:53.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.367 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:53.367 nvme0n1 : 2.00 4317.13 539.64 0.00 0.00 3697.35 2657.85 7670.14 00:28:53.367 =================================================================================================================== 00:28:53.367 Total : 4317.13 539.64 0.00 0.00 3697.35 2657.85 7670.14 00:28:53.367 0 00:28:53.367 15:09:39 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:53.625 15:09:39 -- host/digest.sh@93 -- # get_accel_stats 00:28:53.625 15:09:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:53.625 15:09:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:53.625 | select(.opcode=="crc32c") 00:28:53.625 | "\(.module_name) \(.executed)"' 00:28:53.625 15:09:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:53.625 15:09:39 -- host/digest.sh@94 -- # false 00:28:53.625 15:09:39 -- host/digest.sh@94 -- # exp_module=software 00:28:53.625 15:09:39 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:53.625 15:09:39 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:53.625 15:09:39 -- host/digest.sh@98 -- # killprocess 3898535 00:28:53.625 15:09:39 -- common/autotest_common.sh@936 -- # '[' -z 3898535 ']' 00:28:53.625 15:09:39 -- common/autotest_common.sh@940 -- # kill -0 3898535 00:28:53.625 15:09:39 -- common/autotest_common.sh@941 -- # uname 00:28:53.625 15:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:53.625 15:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3898535 00:28:53.883 15:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:53.883 15:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:53.883 15:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3898535' 00:28:53.883 killing process with pid 3898535 00:28:53.883 15:09:39 -- common/autotest_common.sh@955 -- # kill 3898535 00:28:53.883 Received shutdown signal, test time was about 2.000000 seconds 00:28:53.883 00:28:53.883 Latency(us) 00:28:53.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.883 =================================================================================================================== 00:28:53.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:53.883 15:09:39 -- common/autotest_common.sh@960 -- # wait 3898535 00:28:53.883 15:09:39 -- host/digest.sh@132 -- # killprocess 3897170 00:28:53.883 15:09:39 -- common/autotest_common.sh@936 -- # '[' -z 3897170 ']' 00:28:53.883 15:09:39 -- common/autotest_common.sh@940 -- # kill -0 3897170 00:28:53.883 15:09:39 -- common/autotest_common.sh@941 -- # uname 00:28:53.883 15:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:53.883 15:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3897170 00:28:54.141 15:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:54.141 15:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:54.141 15:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3897170' 00:28:54.141 killing process with pid 3897170 00:28:54.141 15:09:39 -- common/autotest_common.sh@955 -- # kill 3897170 00:28:54.141 15:09:39 -- common/autotest_common.sh@960 -- # wait 3897170 00:28:54.141 
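The killprocess helper used throughout this teardown does more than a bare kill; the trace shows it verifying the pid is alive and is actually an SPDK reactor before terminating it. A rough sketch of that pattern:

    kill -0 "$pid"                                      # fail early if the process is already gone
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]]    # never signal a sudo wrapper by mistake
    kill "$pid" && wait "$pid"                          # terminate, then reap the child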
00:28:54.141 real 0m15.124s 00:28:54.141 user 0m29.456s 00:28:54.141 sys 0m4.686s 00:28:54.141 15:09:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:54.141 15:09:39 -- common/autotest_common.sh@10 -- # set +x 00:28:54.141 ************************************ 00:28:54.141 END TEST nvmf_digest_clean 00:28:54.141 ************************************ 00:28:54.400 15:09:39 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:54.400 15:09:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:54.400 15:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:54.400 15:09:39 -- common/autotest_common.sh@10 -- # set +x 00:28:54.400 ************************************ 00:28:54.400 START TEST nvmf_digest_error 00:28:54.400 ************************************ 00:28:54.400 15:09:40 -- common/autotest_common.sh@1111 -- # run_digest_error 00:28:54.400 15:09:40 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:54.400 15:09:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:54.400 15:09:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:54.400 15:09:40 -- common/autotest_common.sh@10 -- # set +x 00:28:54.400 15:09:40 -- nvmf/common.sh@470 -- # nvmfpid=3898974 00:28:54.400 15:09:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:54.400 15:09:40 -- nvmf/common.sh@471 -- # waitforlisten 3898974 00:28:54.400 15:09:40 -- common/autotest_common.sh@817 -- # '[' -z 3898974 ']' 00:28:54.400 15:09:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.400 15:09:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:54.400 15:09:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.400 15:09:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:54.400 15:09:40 -- common/autotest_common.sh@10 -- # set +x 00:28:54.400 [2024-04-26 15:09:40.059154] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:28:54.400 [2024-04-26 15:09:40.059265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.400 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.400 [2024-04-26 15:09:40.099321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:54.400 [2024-04-26 15:09:40.126588] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.658 [2024-04-26 15:09:40.215806] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.658 [2024-04-26 15:09:40.215869] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.658 [2024-04-26 15:09:40.215900] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.658 [2024-04-26 15:09:40.215912] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:54.658 [2024-04-26 15:09:40.215923] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.658 [2024-04-26 15:09:40.215965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.658 15:09:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:54.658 15:09:40 -- common/autotest_common.sh@850 -- # return 0 00:28:54.658 15:09:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:54.658 15:09:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:54.658 15:09:40 -- common/autotest_common.sh@10 -- # set +x 00:28:54.658 15:09:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.658 15:09:40 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:54.658 15:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:54.658 15:09:40 -- common/autotest_common.sh@10 -- # set +x 00:28:54.658 [2024-04-26 15:09:40.304679] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:54.658 15:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:54.658 15:09:40 -- host/digest.sh@105 -- # common_target_config 00:28:54.658 15:09:40 -- host/digest.sh@43 -- # rpc_cmd 00:28:54.658 15:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:54.658 15:09:40 -- common/autotest_common.sh@10 -- # set +x 00:28:54.917 null0 00:28:54.917 [2024-04-26 15:09:40.414447] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.917 [2024-04-26 15:09:40.438676] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.917 15:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:54.917 15:09:40 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:54.917 15:09:40 -- host/digest.sh@54 -- # local rw bs qd 00:28:54.917 15:09:40 -- host/digest.sh@56 -- # rw=randread 00:28:54.917 15:09:40 -- host/digest.sh@56 -- # bs=4096 00:28:54.917 15:09:40 -- host/digest.sh@56 -- # qd=128 00:28:54.917 15:09:40 -- host/digest.sh@58 -- # bperfpid=3899120 00:28:54.917 15:09:40 -- host/digest.sh@60 -- # waitforlisten 3899120 /var/tmp/bperf.sock 00:28:54.917 15:09:40 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:54.917 15:09:40 -- common/autotest_common.sh@817 -- # '[' -z 3899120 ']' 00:28:54.917 15:09:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:54.917 15:09:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:54.917 15:09:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:54.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:54.917 15:09:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:54.917 15:09:40 -- common/autotest_common.sh@10 -- # set +x 00:28:54.917 [2024-04-26 15:09:40.487295] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
00:28:54.917 15:09:40 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:54.917 15:09:40 -- host/digest.sh@54 -- # local rw bs qd
00:28:54.917 15:09:40 -- host/digest.sh@56 -- # rw=randread
00:28:54.917 15:09:40 -- host/digest.sh@56 -- # bs=4096
00:28:54.917 15:09:40 -- host/digest.sh@56 -- # qd=128
00:28:54.917 15:09:40 -- host/digest.sh@58 -- # bperfpid=3899120
00:28:54.917 15:09:40 -- host/digest.sh@60 -- # waitforlisten 3899120 /var/tmp/bperf.sock
00:28:54.917 15:09:40 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:28:54.917 15:09:40 -- common/autotest_common.sh@817 -- # '[' -z 3899120 ']'
00:28:54.917 15:09:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:54.917 15:09:40 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:54.917 15:09:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:54.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:54.917 15:09:40 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:54.917 15:09:40 -- common/autotest_common.sh@10 -- # set +x
00:28:54.917 [2024-04-26 15:09:40.487295] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:28:54.917 [2024-04-26 15:09:40.487378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899120 ]
00:28:54.917 EAL: No free 2048 kB hugepages reported on node 1
00:28:54.917 [2024-04-26 15:09:40.522271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:28:54.917 [2024-04-26 15:09:40.552941] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:54.917 [2024-04-26 15:09:40.643195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:55.175 15:09:40 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:28:55.175 15:09:40 -- common/autotest_common.sh@850 -- # return 0
00:28:55.175 15:09:40 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:55.175 15:09:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:55.433 15:09:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:55.433 15:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:55.433 15:09:40 -- common/autotest_common.sh@10 -- # set +x
00:28:55.433 15:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:55.433 15:09:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:55.433 15:09:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:55.691 nvme0n1
00:28:55.691 15:09:41 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:55.691 15:09:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:55.691 15:09:41 -- common/autotest_common.sh@10 -- # set +x
00:28:55.691 15:09:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:55.691 15:09:41 -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:55.691 15:09:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:55.949 Running I/O for 2 seconds...
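On the host side the script mirrors this over bdevperf's private RPC socket: retries are made unbounded, the injector is cleared, the controller is attached with the NVMe/TCP data digest enabled (--ddgst), and only then are 256 crc32c operations armed to be corrupted, so the digest computed over each received READ payload mismatches on purpose. All flags below are taken verbatim from the trace; only the shell variable is ours:

    BPERF="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Retry forever so injected digest failures are re-driven instead of failing I/O up.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start from a clean injector, attach with data digest on, then arm the corruption.
    $BPERF accel_error_inject_error -o crc32c -t disable
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $BPERF accel_error_inject_error -o crc32c -t corrupt -i 256
    # bdevperf was launched with -z, so it sits idle until perform_tests arrives.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

Each corrupted computation then surfaces in the records that follow as a "data digest error" on the TCP qpair, completed upward as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic) with status code 0x22 (Transient Transport Error), which the unbounded retry policy re-queues rather than reporting to bdevperf.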
00:28:55.949 [2024-04-26 15:09:41.476867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.476918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.476949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.493570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.493616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.493637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.509844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.509881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.509901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.522818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.522854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.522874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.536876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.536912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.536932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.550530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.550564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.550583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.567360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.567388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.567418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.579574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.579609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.579628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.594600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.594636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.594656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.606266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.606299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.606317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.621563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.621599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.621618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.633137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.949 [2024-04-26 15:09:41.633166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.949 [2024-04-26 15:09:41.633183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.949 [2024-04-26 15:09:41.649609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.950 [2024-04-26 15:09:41.649645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.950 [2024-04-26 15:09:41.649665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.950 [2024-04-26 15:09:41.663626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.950 [2024-04-26 15:09:41.663663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.950 [2024-04-26 15:09:41.663683] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.950 [2024-04-26 15:09:41.680043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:55.950 [2024-04-26 15:09:41.680088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.950 [2024-04-26 15:09:41.680104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.208 [2024-04-26 15:09:41.691732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.208 [2024-04-26 15:09:41.691768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.208 [2024-04-26 15:09:41.691788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.208 [2024-04-26 15:09:41.707447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.208 [2024-04-26 15:09:41.707484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.208 [2024-04-26 15:09:41.707504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.208 [2024-04-26 15:09:41.721193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.208 [2024-04-26 15:09:41.721222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.208 [2024-04-26 15:09:41.721239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.208 [2024-04-26 15:09:41.733157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.208 [2024-04-26 15:09:41.733186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.208 [2024-04-26 15:09:41.733202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.208 [2024-04-26 15:09:41.748691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.208 [2024-04-26 15:09:41.748726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.208 [2024-04-26 15:09:41.748747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.208 [2024-04-26 15:09:41.763396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.208 [2024-04-26 15:09:41.763431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.208 [2024-04-26 15:09:41.763451] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.208 [2024-04-26 15:09:41.775632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.208 [2024-04-26 15:09:41.775667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.208 [2024-04-26 15:09:41.775687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.208 [2024-04-26 15:09:41.790896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.208 [2024-04-26 15:09:41.790931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.208 [2024-04-26 15:09:41.790951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.208 [2024-04-26 15:09:41.805835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.208 [2024-04-26 15:09:41.805870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.208 [2024-04-26 15:09:41.805891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.818698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.818732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.209 [2024-04-26 15:09:41.818751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.834591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.834626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.209 [2024-04-26 15:09:41.834646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.849255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.849284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.209 [2024-04-26 15:09:41.849308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.862802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.862837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:56.209 [2024-04-26 15:09:41.862857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.875120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.875154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.209 [2024-04-26 15:09:41.875174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.890336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.890372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.209 [2024-04-26 15:09:41.890391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.904951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.904987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.209 [2024-04-26 15:09:41.905006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.917087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.917121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.209 [2024-04-26 15:09:41.917141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.932125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.932154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.209 [2024-04-26 15:09:41.932170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.209 [2024-04-26 15:09:41.946484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.209 [2024-04-26 15:09:41.946524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.209 [2024-04-26 15:09:41.946546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:41.961414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:41.961452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20821 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:41.961474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:41.974507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:41.974548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:41.974569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:41.990006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:41.990067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:41.990085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.007167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.007197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.007215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.019111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.019139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.019156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.035030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.035079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.035096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.048240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.048284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.048302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.060892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.060921] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.060954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.072908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.072937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.072969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.085443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.085471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.085503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.097096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.097125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.097143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.108256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.108285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.108315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.122431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.122475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.122491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.135964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.135992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.136030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.147598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.147626] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.147658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.158748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.158776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.158808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.171419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.171447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.171477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.184726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.184753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.184785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.468 [2024-04-26 15:09:42.196527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.468 [2024-04-26 15:09:42.196555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.468 [2024-04-26 15:09:42.196591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.208167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.208203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.208222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.221422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.221453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.221485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.234122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.234150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.234182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.245113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.245143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.245160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.257645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.257673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.257705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.270456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.270484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.270515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.284046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.284076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.284092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.295742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.295770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.295802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.307763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.307790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.307822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.319922] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.319949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.319980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.332888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.332916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.332947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.727 [2024-04-26 15:09:42.344077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.727 [2024-04-26 15:09:42.344106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.727 [2024-04-26 15:09:42.344122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.728 [2024-04-26 15:09:42.358330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.728 [2024-04-26 15:09:42.358360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.728 [2024-04-26 15:09:42.358377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.728 [2024-04-26 15:09:42.368736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.728 [2024-04-26 15:09:42.368764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.728 [2024-04-26 15:09:42.368796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.728 [2024-04-26 15:09:42.383762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.728 [2024-04-26 15:09:42.383796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.728 [2024-04-26 15:09:42.383828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.728 [2024-04-26 15:09:42.398945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.728 [2024-04-26 15:09:42.398973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.728 [2024-04-26 15:09:42.399005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:56.728 [2024-04-26 15:09:42.414184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.728 [2024-04-26 15:09:42.414213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.728 [2024-04-26 15:09:42.414236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.728 [2024-04-26 15:09:42.424415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.728 [2024-04-26 15:09:42.424444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.728 [2024-04-26 15:09:42.424475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.728 [2024-04-26 15:09:42.438551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.728 [2024-04-26 15:09:42.438581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.728 [2024-04-26 15:09:42.438612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.728 [2024-04-26 15:09:42.454260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.728 [2024-04-26 15:09:42.454298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.728 [2024-04-26 15:09:42.454331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.469200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.469233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.469251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.479930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.479962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.479997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.494370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.494399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.494430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.507735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.507765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.507796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.521199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.521230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.521247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.533693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.533727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.533759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.545910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.545939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.545969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.556743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.556772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.556808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.570415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.570444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.570475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.581992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.582042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.582064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.595860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.987 [2024-04-26 15:09:42.595888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.987 [2024-04-26 15:09:42.595919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.987 [2024-04-26 15:09:42.608540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.608568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 [2024-04-26 15:09:42.608603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.988 [2024-04-26 15:09:42.619678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.619707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 [2024-04-26 15:09:42.619738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.988 [2024-04-26 15:09:42.633843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.633872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 [2024-04-26 15:09:42.633903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.988 [2024-04-26 15:09:42.646089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.646118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 [2024-04-26 15:09:42.646135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.988 [2024-04-26 15:09:42.658846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.658874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 [2024-04-26 15:09:42.658906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.988 [2024-04-26 15:09:42.669731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.669758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 
[2024-04-26 15:09:42.669789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.988 [2024-04-26 15:09:42.681602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.681630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 [2024-04-26 15:09:42.681661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.988 [2024-04-26 15:09:42.695369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.695397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 [2024-04-26 15:09:42.695428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.988 [2024-04-26 15:09:42.707959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.707986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 [2024-04-26 15:09:42.708017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.988 [2024-04-26 15:09:42.718706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:56.988 [2024-04-26 15:09:42.718734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.988 [2024-04-26 15:09:42.718765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.733686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.733717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.247 [2024-04-26 15:09:42.733749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.743964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.743993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.247 [2024-04-26 15:09:42.744042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.756770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.756800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15754 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.247 [2024-04-26 15:09:42.756831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.769008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.769057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.247 [2024-04-26 15:09:42.769075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.782654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.782683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.247 [2024-04-26 15:09:42.782714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.795103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.795132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.247 [2024-04-26 15:09:42.795148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.805928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.805955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.247 [2024-04-26 15:09:42.805987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.818053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.818082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.247 [2024-04-26 15:09:42.818098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.829512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.829539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.247 [2024-04-26 15:09:42.829571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.247 [2024-04-26 15:09:42.841410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130) 00:28:57.247 [2024-04-26 15:09:42.841438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-26 15:09:42.841470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[2024-04-26 15:09:42.855110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130)
[2024-04-26 15:09:42.855148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-26 15:09:42.855166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[2024-04-26 15:09:42.867343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130)
[2024-04-26 15:09:42.867373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-26 15:09:42.867389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line burst (data digest error on tqpair 0x757130, the failed READ, a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) recurs roughly every 10-15 ms for the rest of the 2-second run; the final occurrence and the run summary follow ...]
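Each burst above is one injected digest failure: nvme_tcp.c verifies the CRC32C data digest of a received data PDU, the corrupted digest fails the check, and the READ is completed with status (00/22), the NVMe generic status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR. A minimal sketch for tallying such completions offline, assuming the output above were captured to a file (the name bperf.log is a stand-in, not something this job creates):

    # count digest failures surfaced as transient transport errors in a saved log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log

The test itself does not parse the log; it reads the same count from the bdev layer over RPC, as the trace after the latency summary shows.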
[2024-04-26 15:09:43.459909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x757130)
[2024-04-26 15:09:43.459943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-26 15:09:43.459963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
nvme0n1 : 2.00 18962.75 74.07 0.00 0.00 6739.18 3082.62 23398.78
===================================================================================================================
Total : 18962.75 74.07 0.00 0.00 6739.18 3082.62 23398.78
0
15:09:43 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
15:09:43 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
15:09:43 -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
15:09:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
15:09:43 -- host/digest.sh@71 -- # (( 149 > 0 ))
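get_transient_errcount, traced above, reads the per-bdev NVMe error counters that bdev_get_iostat exposes once bdev_nvme_set_options --nvme-error-stat is in effect, and this step passes because the transient-transport-error count (149) is greater than zero. A standalone sketch of the same check, assuming an SPDK checkout in $SPDK_DIR and a bdevperf instance listening on /var/tmp/bperf.sock:

    # sketch of host/digest.sh's transient-error check; SPDK_DIR is an assumption
    errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "OK: $errcount transient transport errors recorded"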
15:09:43 -- host/digest.sh@73 -- # killprocess 3899120
15:09:43 -- common/autotest_common.sh@936 -- # '[' -z 3899120 ']'
15:09:43 -- common/autotest_common.sh@940 -- # kill -0 3899120
15:09:43 -- common/autotest_common.sh@941 -- # uname
15:09:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
15:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3899120
15:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1
15:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
15:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3899120'
killing process with pid 3899120
15:09:43 -- common/autotest_common.sh@955 -- # kill 3899120
Received shutdown signal, test time was about 2.000000 seconds

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:09:43 -- common/autotest_common.sh@960 -- # wait 3899120
15:09:44 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
15:09:44 -- host/digest.sh@54 -- # local rw bs qd
15:09:44 -- host/digest.sh@56 -- # rw=randread
15:09:44 -- host/digest.sh@56 -- # bs=131072
15:09:44 -- host/digest.sh@56 -- # qd=16
15:09:44 -- host/digest.sh@58 -- # bperfpid=3899530
15:09:44 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
15:09:44 -- host/digest.sh@60 -- # waitforlisten 3899530 /var/tmp/bperf.sock
15:09:44 -- common/autotest_common.sh@817 -- # '[' -z 3899530 ']'
15:09:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
15:09:44 -- common/autotest_common.sh@822 -- # local max_retries=100
15:09:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
15:09:44 -- common/autotest_common.sh@826 -- # xtrace_disable
15:09:44 -- common/autotest_common.sh@10 -- # set +x
[2024-04-26 15:09:44.050059] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
[2024-04-26 15:09:44.050132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899530 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
EAL: No free 2048 kB hugepages reported on node 1
[2024-04-26 15:09:44.080410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
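As traced above, run_bperf_err repeats the scenario with 128 KiB reads at queue depth 16: it launches a fresh bdevperf with -z (start idle and wait for RPC configuration) and blocks in waitforlisten until the RPC socket is usable. A minimal sketch of that launch-and-poll pattern, assuming $SPDK_DIR as before; the real waitforlisten helper in autotest_common.sh is more thorough, and the retry bound and rpc_get_methods probe here are illustrative:

    # start bdevperf idle on a private RPC socket, then poll until it answers RPCs
    "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    for ((i = 0; i < 100; i++)); do
        "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done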
[2024-04-26 15:09:44.107731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-26 15:09:44.193795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
15:09:44 -- common/autotest_common.sh@846 -- # (( i == 0 ))
15:09:44 -- common/autotest_common.sh@850 -- # return 0
15:09:44 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
15:09:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
15:09:44 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
15:09:44 -- common/autotest_common.sh@549 -- # xtrace_disable
15:09:44 -- common/autotest_common.sh@10 -- # set +x
15:09:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
15:09:44 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
15:09:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
15:09:45 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
15:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable
15:09:45 -- common/autotest_common.sh@10 -- # set +x
15:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
15:09:45 -- host/digest.sh@69 -- # bperf_py perform_tests
15:09:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
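The trace above is the whole setup for the second run: enable per-status-code NVMe error counting (--nvme-error-stat), retry failed I/O indefinitely at the bdev layer (--bdev-retry-count -1), attach the NVMe/TCP controller with data digest enabled (--ddgst), clear any stale injection, then arm the accel error injector to corrupt crc32c results at an interval of 32 operations before starting the workload. A condensed sketch of the traced RPC sequence, using the same socket and target addressing as above:

    # configure the digest-error scenario and start the run (sketch of the traced RPCs)
    rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc accel_error_inject_error -o crc32c -t disable        # reset any previous injection
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0               # prints the created bdev: nvme0n1
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # inject crc32c corruption at interval 32
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests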
[2024-04-26 15:09:45.262585] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0)
[2024-04-26 15:09:45.262643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-26 15:09:45.262667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[2024-04-26 15:09:45.273747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0)
[2024-04-26 15:09:45.273781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-26 15:09:45.273801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the burst repeats with the same shape for the rest of the 2-second 128 KiB randread run: in this excerpt every failure lands on qid:1 cid:15 with len:32, sqhd cycling 0001/0021/0041/0061, roughly every 7-12 ms ...]
[2024-04-26 15:09:45.984450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0)
[2024-04-26 15:09:45.984483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-26 15:09:45.984501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[2024-04-26 15:09:45.992460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0)
[2024-04-26 15:09:45.992493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:45.992513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.001931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.001964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.001983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.010655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.010689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.010708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.020139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.020168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.020200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.028985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.029033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.029061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.036561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.036589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.036620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.044068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.044107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.044140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.052629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.052658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.052690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.061200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.061230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.061262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.070730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.070759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.070791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.080771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.080800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.080833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.089760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.089789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.089820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.099135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.099165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.099196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.107421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.107460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.107503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.116320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.116364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.116380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.125411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.125445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.125476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.403 [2024-04-26 15:09:46.133753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.403 [2024-04-26 15:09:46.133791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.403 [2024-04-26 15:09:46.133821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.662 [2024-04-26 15:09:46.142756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.662 [2024-04-26 15:09:46.142786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.662 [2024-04-26 15:09:46.142821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.662 [2024-04-26 15:09:46.151964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.662 [2024-04-26 15:09:46.151993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.662 [2024-04-26 15:09:46.152033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.662 [2024-04-26 15:09:46.161226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.662 [2024-04-26 15:09:46.161255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.662 [2024-04-26 15:09:46.161286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.662 [2024-04-26 15:09:46.170614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.662 [2024-04-26 15:09:46.170649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.662 [2024-04-26 15:09:46.170665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.662 [2024-04-26 15:09:46.180429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.662 
[2024-04-26 15:09:46.180463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.662 [2024-04-26 15:09:46.180494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.662 [2024-04-26 15:09:46.190715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.662 [2024-04-26 15:09:46.190757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.190789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.201141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.201169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.201201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.211793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.211820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.211851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.222582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.222608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.222639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.233359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.233386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.233425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.244590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.244617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.244651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.255781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.255809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.255839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.268077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.268106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.268138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.279477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.279505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.279534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.291185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.291215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.291247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.302635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.302663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.302694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.312314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.312381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.312398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.323277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.323311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.323345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.333910] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.333937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.333967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.344249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.344276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.344313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.355198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.355226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.355256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.366545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.366572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.366602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.377909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.377937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.377974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.389516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.389544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.389574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.663 [2024-04-26 15:09:46.400995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.663 [2024-04-26 15:09:46.401049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.663 [2024-04-26 15:09:46.401083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:29:00.922 [2024-04-26 15:09:46.412573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.412603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.412635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.423991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.424042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.424059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.435451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.435480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.435510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.447097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.447126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.447158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.458549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.458577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.458607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.469978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.470006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.470047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.481528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.481555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.481586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.492898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.492926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.492957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.504485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.504513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.504543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.516388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.516415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.516446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.527814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.527841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.922 [2024-04-26 15:09:46.527871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.922 [2024-04-26 15:09:46.539714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.922 [2024-04-26 15:09:46.539743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.539775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.549051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.549080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.549111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.556561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.556589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.556619] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.563986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.564035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.564086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.572095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.572124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.572155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.580664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.580691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.580722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.591139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.591166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.591197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.602140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.602169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.602201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.613932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.613961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.613991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.625463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.625508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.625524] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.637221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.637250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.637282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.648921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.648948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.648986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.923 [2024-04-26 15:09:46.660751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:00.923 [2024-04-26 15:09:46.660794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.923 [2024-04-26 15:09:46.660831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.672371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.672401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.672433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.684322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.684350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.684365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.695791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.695818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.695849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.707223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.707252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:01.182 [2024-04-26 15:09:46.707284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.718756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.718783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.718814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.730424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.730452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.730483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.741880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.741909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.741940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.753086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.753113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.753145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.764365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.764392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.764422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.775853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.775882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.775913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.788594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.788623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.788654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.799991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.800039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.800058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.811519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.811546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.811576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.823035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.823077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.823094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.834379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.834407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.182 [2024-04-26 15:09:46.834437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.182 [2024-04-26 15:09:46.845646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.182 [2024-04-26 15:09:46.845673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.183 [2024-04-26 15:09:46.845703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.183 [2024-04-26 15:09:46.857318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.183 [2024-04-26 15:09:46.857361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.183 [2024-04-26 15:09:46.857385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.183 [2024-04-26 15:09:46.868658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.183 [2024-04-26 15:09:46.868685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.183 [2024-04-26 15:09:46.868715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.183 [2024-04-26 15:09:46.880231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.183 [2024-04-26 15:09:46.880261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.183 [2024-04-26 15:09:46.880293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.183 [2024-04-26 15:09:46.891886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.183 [2024-04-26 15:09:46.891913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.183 [2024-04-26 15:09:46.891942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.183 [2024-04-26 15:09:46.903989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.183 [2024-04-26 15:09:46.904041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.183 [2024-04-26 15:09:46.904058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.183 [2024-04-26 15:09:46.915645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.183 [2024-04-26 15:09:46.915673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.183 [2024-04-26 15:09:46.915704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.442 [2024-04-26 15:09:46.927300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.442 [2024-04-26 15:09:46.927348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.442 [2024-04-26 15:09:46.927364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.442 [2024-04-26 15:09:46.939089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 00:29:01.442 [2024-04-26 15:09:46.939119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.442 [2024-04-26 15:09:46.939151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.442 [2024-04-26 15:09:46.947480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0) 
00:29:01.442 [2024-04-26 15:09:46.947510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.442 [2024-04-26 15:09:46.947541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:01.442 [2024-04-26 15:09:46.955817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f32ea0)
00:29:01.442 [2024-04-26 15:09:46.955855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.442 [2024-04-26 15:09:46.955887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:01.442 [... 15:09:46.964213 through 15:09:47.256202: a few dozen further repeats of the same triplet omitted, each an nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done data digest error on tqpair=(0x1f32ea0) followed by the failing READ (qid:1, cids 0-15, len:32, varying lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 ...]
00:29:01.702 
00:29:01.702 Latency(us)
00:29:01.702 Device Information          : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:29:01.702 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:01.702 nvme0n1                     :       2.01  3105.67   388.21    0.00  0.00  5147.38  1019.45  13010.11
00:29:01.702 ===================================================================================================================
00:29:01.702 Total                       :             3105.67   388.21    0.00  0.00  5147.38  1019.45  13010.11
00:29:01.702 0
00:29:01.702 15:09:47 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:01.702 15:09:47 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:01.702 15:09:47 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:01.702 | .driver_specific
00:29:01.702 | .nvme_error
00:29:01.702 | .status_code
00:29:01.702 | .command_transient_transport_error'
00:29:01.702 15:09:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:01.961 15:09:47 -- host/digest.sh@71 -- # (( 200 > 0 ))
00:29:01.961 15:09:47 -- host/digest.sh@73 -- # killprocess 3899530
00:29:01.961 15:09:47 -- common/autotest_common.sh@936 -- # '[' -z 3899530 ']'
00:29:01.961 15:09:47 -- common/autotest_common.sh@940 -- # kill -0 3899530
00:29:01.961 15:09:47 -- common/autotest_common.sh@941 -- # uname
00:29:01.961 15:09:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:01.961 15:09:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3899530
00:29:01.961 15:09:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:01.961 15:09:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:01.961 15:09:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3899530'
killing process with pid 3899530
15:09:47 -- common/autotest_common.sh@955 -- # kill 3899530
Received shutdown signal, test time was about 2.000000 seconds
00:29:01.961 
00:29:01.961 Latency(us)
00:29:01.961 Device Information          : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:29:01.961 ===================================================================================================================
00:29:01.961 Total                       :                0.00     0.00    0.00  0.00     0.00     0.00      0.00
15:09:47 -- common/autotest_common.sh@960 -- # wait 3899530
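The trace just above is the pass/fail check for the randread leg: get_transient_errcount asks bdev_get_iostat for the per-bdev NVMe error counters and the test passes only if the transient-transport-error count is non-zero (here it was 200). A minimal standalone sketch of the same check, using the rpc.py path and bperf socket shown in this log; the shell variable names are illustrative, not names from host/digest.sh:

  # Sketch only: reads the counter that get_transient_errcount extracts.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  # bdev_nvme_set_options --nvme-error-stat (issued before the controller
  # was attached) is what makes these per-status-code counters appear in
  # the bdev_get_iostat JSON under driver_specific.nvme_error.
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

  # Same assertion as the (( 200 > 0 )) line in the trace.
  (( errcount > 0 )) && echo "saw $errcount transient transport errors"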
15:09:47 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
15:09:47 -- host/digest.sh@54 -- # local rw bs qd
15:09:47 -- host/digest.sh@56 -- # rw=randwrite
15:09:47 -- host/digest.sh@56 -- # bs=4096
15:09:47 -- host/digest.sh@56 -- # qd=128
15:09:47 -- host/digest.sh@58 -- # bperfpid=3899936
15:09:47 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
15:09:47 -- host/digest.sh@60 -- # waitforlisten 3899936 /var/tmp/bperf.sock
15:09:47 -- common/autotest_common.sh@817 -- # '[' -z 3899936 ']'
15:09:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
15:09:47 -- common/autotest_common.sh@822 -- # local max_retries=100
15:09:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
15:09:47 -- common/autotest_common.sh@826 -- # xtrace_disable
15:09:47 -- common/autotest_common.sh@10 -- # set +x
00:29:02.219 [2024-04-26 15:09:47.823690] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:29:02.219 [2024-04-26 15:09:47.823763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899936 ]
00:29:02.219 EAL: No free 2048 kB hugepages reported on node 1
00:29:02.219 [2024-04-26 15:09:47.854572] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:29:02.219 [2024-04-26 15:09:47.886234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:02.478 [2024-04-26 15:09:47.972659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:02.478 15:09:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:02.478 15:09:48 -- common/autotest_common.sh@850 -- # return 0
00:29:02.478 15:09:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:02.478 15:09:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:02.735 15:09:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:02.735 15:09:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:02.735 15:09:48 -- common/autotest_common.sh@10 -- # set +x
00:29:02.735 15:09:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:02.735 15:09:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:02.735 15:09:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:02.992 nvme0n1
00:29:02.992 15:09:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:02.992 15:09:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:02.992 15:09:48 -- common/autotest_common.sh@10 -- # set +x
00:29:02.992 15:09:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:02.992 15:09:48 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:02.992 15:09:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:03.250 Running I/O for 2 seconds...
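That trace is the whole setup for the second error-injection pass (run_bperf_err randwrite 4096 128). Condensed into plain commands it looks like the sketch below; every path and flag is copied from the trace, with two caveats: rpc_cmd's socket is not visible in this log, so the calls without -s are assumed to go at the suite's default RPC socket rather than bdevperf's, and the reading of -i 256 as an injection interval is my interpretation of the flag, not something this excerpt states.

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # 4 KiB random writes, queue depth 128, 2 s, core mask 0x2; -z keeps
  # bdevperf idle until perform_tests arrives over /var/tmp/bperf.sock.
  "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &

  # Keep per-status-code NVMe error counters and retry failed I/O
  # indefinitely, so injected digest errors are counted in iostat
  # instead of failing the job outright.
  "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # Injection stays off while the controller connects ...
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # ... the controller is attached with data digest (--ddgst) enabled ...
  "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ... and only then is crc32c corruption enabled (-i 256 taken
  # verbatim from the trace), which is what turns the WRITEs below into
  # TRANSIENT TRANSPORT ERROR (00/22) completions.
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

The ordering is the point of the sequence: corruption is switched on only after the controller has attached cleanly, so admin traffic connects without errors and only the measured workload fails its digest checks.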
00:29:03.250 [2024-04-26 15:09:48.840609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190ee190
00:29:03.250 [2024-04-26 15:09:48.841647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:03.250 [2024-04-26 15:09:48.841684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:03.250 [... 15:09:48.852873 through 15:09:50.077365: several dozen further repeats of the same triplet omitted, each a tcp.c:2047:data_crc32_calc_done Data digest error on tqpair=(0xf56020) with a varying pdu value in the 0x2000190xxxxx range, followed by the failing WRITE (qid:1, len:1 block of 0x1000 bytes, varying cid and lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 ...]
00:29:04.550 [2024-04-26 15:09:50.090425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e8088
00:29:04.550 [2024-04-26 15:09:50.091334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:46 nsid:1 lba:4296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.091376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.103998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190ed0b0 00:29:04.550 [2024-04-26 15:09:50.105009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.105044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.116290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e9e10 00:29:04.550 [2024-04-26 15:09:50.117329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.117356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.128406] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fa3a0 00:29:04.550 [2024-04-26 15:09:50.129425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.129451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.140682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e95a0 00:29:04.550 [2024-04-26 15:09:50.141707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.141733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.152859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e01f8 00:29:04.550 [2024-04-26 15:09:50.153918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.153944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.165495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f2510 00:29:04.550 [2024-04-26 15:09:50.166648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.166673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.177800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fb048 00:29:04.550 [2024-04-26 15:09:50.178977] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.179017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.189864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fc128 00:29:04.550 [2024-04-26 15:09:50.191056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.191082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.201117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e7c50 00:29:04.550 [2024-04-26 15:09:50.202264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.202291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.214535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f8a50 00:29:04.550 [2024-04-26 15:09:50.215877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.215905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.227306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e3060 00:29:04.550 [2024-04-26 15:09:50.228797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.228823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.238659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190ec840 00:29:04.550 [2024-04-26 15:09:50.239950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.239992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.250287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f0350 00:29:04.550 [2024-04-26 15:09:50.251624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.251650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.263695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e9168 00:29:04.550 [2024-04-26 15:09:50.265159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.265187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.274908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fa7d8 00:29:04.550 [2024-04-26 15:09:50.276355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.550 [2024-04-26 15:09:50.276396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.550 [2024-04-26 15:09:50.287551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f4298 00:29:04.551 [2024-04-26 15:09:50.289063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.551 [2024-04-26 15:09:50.289094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:04.816 [2024-04-26 15:09:50.300121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e9168 00:29:04.816 [2024-04-26 15:09:50.301139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.816 [2024-04-26 15:09:50.301170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.816 [2024-04-26 15:09:50.313040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190eff18 00:29:04.816 [2024-04-26 15:09:50.314419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.816 [2024-04-26 15:09:50.314447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.816 [2024-04-26 15:09:50.325188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f2948 00:29:04.816 [2024-04-26 15:09:50.326498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.816 [2024-04-26 15:09:50.326524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.816 [2024-04-26 15:09:50.337266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190ed4e8 00:29:04.817 [2024-04-26 15:09:50.338612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.338638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.349262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190ea680 00:29:04.817 [2024-04-26 
15:09:50.350544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.350576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.361337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f8e88 00:29:04.817 [2024-04-26 15:09:50.362567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.362594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.373063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f57b0 00:29:04.817 [2024-04-26 15:09:50.374226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.374253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.385556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fdeb0 00:29:04.817 [2024-04-26 15:09:50.386698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.386725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.397132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190dfdc0 00:29:04.817 [2024-04-26 15:09:50.398873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.398899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.407728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e12d8 00:29:04.817 [2024-04-26 15:09:50.408676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.408704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.420531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fc560 00:29:04.817 [2024-04-26 15:09:50.421677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.421706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.433464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e3060 
00:29:04.817 [2024-04-26 15:09:50.434730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.434759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.446284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fc998 00:29:04.817 [2024-04-26 15:09:50.447667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.447693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.459100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190df988 00:29:04.817 [2024-04-26 15:09:50.460615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.460642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.471745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fc560 00:29:04.817 [2024-04-26 15:09:50.473520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.473561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.484336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e3498 00:29:04.817 [2024-04-26 15:09:50.486333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.486360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.497251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190dfdc0 00:29:04.817 [2024-04-26 15:09:50.499312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.499338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.505962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f31b8 00:29:04.817 [2024-04-26 15:09:50.506892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.506918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.520941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with 
pdu=0x2000190e6b70 00:29:04.817 [2024-04-26 15:09:50.522725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.522767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.532793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f31b8 00:29:04.817 [2024-04-26 15:09:50.534579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.534607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:04.817 [2024-04-26 15:09:50.545561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e3060 00:29:04.817 [2024-04-26 15:09:50.547498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.817 [2024-04-26 15:09:50.547526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:05.077 [2024-04-26 15:09:50.558747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fd208 00:29:05.077 [2024-04-26 15:09:50.560914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.077 [2024-04-26 15:09:50.560951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:05.077 [2024-04-26 15:09:50.567368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e5ec8 00:29:05.077 [2024-04-26 15:09:50.568326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.077 [2024-04-26 15:09:50.568379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:05.077 [2024-04-26 15:09:50.579091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fbcf0 00:29:05.077 [2024-04-26 15:09:50.580045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.077 [2024-04-26 15:09:50.580085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:05.077 [2024-04-26 15:09:50.591855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190df550 00:29:05.077 [2024-04-26 15:09:50.592938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.077 [2024-04-26 15:09:50.592965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:05.077 [2024-04-26 15:09:50.604684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf56020) with pdu=0x2000190e27f0 00:29:05.077 [2024-04-26 15:09:50.605941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.077 [2024-04-26 15:09:50.605967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:05.077 [2024-04-26 15:09:50.617198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e49b0 00:29:05.077 [2024-04-26 15:09:50.618486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.077 [2024-04-26 15:09:50.618514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:05.077 [2024-04-26 15:09:50.629140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e8d30 00:29:05.077 [2024-04-26 15:09:50.630368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.077 [2024-04-26 15:09:50.630393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:05.077 [2024-04-26 15:09:50.641344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e27f0 00:29:05.077 [2024-04-26 15:09:50.642803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.077 [2024-04-26 15:09:50.642830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:05.077 [2024-04-26 15:09:50.653823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f8e88 00:29:05.077 [2024-04-26 15:09:50.655459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.655485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.664834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fb480 00:29:05.078 [2024-04-26 15:09:50.665933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.665964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.676392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e12d8 00:29:05.078 [2024-04-26 15:09:50.677480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.677506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.688015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf56020) with pdu=0x2000190dece0 00:29:05.078 [2024-04-26 15:09:50.689101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.689128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.699612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e73e0 00:29:05.078 [2024-04-26 15:09:50.700653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.700679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.711295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e4de8 00:29:05.078 [2024-04-26 15:09:50.712361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.712386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.722814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e3060 00:29:05.078 [2024-04-26 15:09:50.723894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.723920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.734456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fda78 00:29:05.078 [2024-04-26 15:09:50.735573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.735599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.746051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fe720 00:29:05.078 [2024-04-26 15:09:50.747114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.747141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.759326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f8a50 00:29:05.078 [2024-04-26 15:09:50.760962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.078 [2024-04-26 15:09:50.760990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:05.078 [2024-04-26 15:09:50.770225] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190e5220
00:29:05.078 [2024-04-26 15:09:50.771471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.078 [2024-04-26 15:09:50.771497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:05.078 [2024-04-26 15:09:50.780951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fda78
00:29:05.078 [2024-04-26 15:09:50.782192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.078 [2024-04-26 15:09:50.782218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:05.078 [2024-04-26 15:09:50.793896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f6cc8
00:29:05.078 [2024-04-26 15:09:50.795326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.078 [2024-04-26 15:09:50.795352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:05.078 [2024-04-26 15:09:50.805511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190f2d80
00:29:05.078 [2024-04-26 15:09:50.806902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.078 [2024-04-26 15:09:50.806928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:05.338 [2024-04-26 15:09:50.817439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190eaef0
00:29:05.338 [2024-04-26 15:09:50.818919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.338 [2024-04-26 15:09:50.818947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:05.338 [2024-04-26 15:09:50.829334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56020) with pdu=0x2000190fc998
00:29:05.338 [2024-04-26 15:09:50.830749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.338 [2024-04-26 15:09:50.830777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:05.338
00:29:05.338 Latency(us)
00:29:05.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:05.338 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:05.338 nvme0n1 : 2.01 20312.21 79.34 0.00 0.00 6291.87 2742.80 16214.09
00:29:05.338 ===================================================================================================================
00:29:05.338 Total : 20312.21 79.34 0.00 0.00 6291.87 2742.80 16214.09
00:29:05.338 0
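(Consistency check on the summary line above: 20312.21 IOPS at the job's 4096-byte I/O size works out to 20312.21 × 4096 / 1048576 ≈ 79.34 MiB/s, matching the MiB/s column, and the 2.01 s runtime implies roughly 2.01 × 20312 ≈ 40.8 k completed writes for the run.)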
00:29:05.338 15:09:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:05.338 15:09:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:05.338 15:09:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:05.338 | .driver_specific
00:29:05.338 | .nvme_error
00:29:05.338 | .status_code
00:29:05.338 | .command_transient_transport_error'
00:29:05.338 15:09:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:05.597 15:09:51 -- host/digest.sh@71 -- # (( 159 > 0 ))
00:29:05.597 15:09:51 -- host/digest.sh@73 -- # killprocess 3899936
00:29:05.597 15:09:51 -- common/autotest_common.sh@936 -- # '[' -z 3899936 ']'
00:29:05.597 15:09:51 -- common/autotest_common.sh@940 -- # kill -0 3899936
00:29:05.597 15:09:51 -- common/autotest_common.sh@941 -- # uname
00:29:05.597 15:09:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:05.597 15:09:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3899936
00:29:05.597 15:09:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:05.597 15:09:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:05.597 15:09:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3899936'
00:29:05.597 killing process with pid 3899936
00:29:05.597 15:09:51 -- common/autotest_common.sh@955 -- # kill 3899936
00:29:05.597 Received shutdown signal, test time was about 2.000000 seconds
00:29:05.597
00:29:05.597 Latency(us)
00:29:05.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:05.597 ===================================================================================================================
00:29:05.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:05.597 15:09:51 -- common/autotest_common.sh@960 -- # wait 3899936
00:29:05.855 15:09:51 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:05.855 15:09:51 -- host/digest.sh@54 -- # local rw bs qd
00:29:05.855 15:09:51 -- host/digest.sh@56 -- # rw=randwrite
00:29:05.855 15:09:51 -- host/digest.sh@56 -- # bs=131072
00:29:05.855 15:09:51 -- host/digest.sh@56 -- # qd=16
00:29:05.855 15:09:51 -- host/digest.sh@58 -- # bperfpid=3900341
00:29:05.855 15:09:51 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:05.855 15:09:51 -- host/digest.sh@60 -- # waitforlisten 3900341 /var/tmp/bperf.sock
00:29:05.856 15:09:51 -- common/autotest_common.sh@817 -- # '[' -z 3900341 ']'
00:29:05.856 15:09:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:05.856 15:09:51 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:05.856 15:09:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:05.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
15:09:51 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:05.856 15:09:51 -- common/autotest_common.sh@10 -- # set +x
00:29:05.856 [2024-04-26 15:09:51.400818] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
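The pass/fail gate that fired in the trace above, (( 159 > 0 )), reduces to one RPC plus a jq filter over the bdev's per-command NVMe error counters (available because bdev_nvme_set_options is given --nvme-error-stat). A minimal standalone sketch of the same query, assuming an SPDK checkout as the working directory and a bdevperf instance serving RPCs on /var/tmp/bperf.sock; the socket path, bdev name, and filter are taken verbatim from the trace:

    # Count of writes that completed with TRANSIENT TRANSPORT ERROR;
    # the digest test asserts this count is non-zero (159 in this run).
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'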
00:29:05.856 [2024-04-26 15:09:51.400889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900341 ]
00:29:05.856 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:05.856 Zero copy mechanism will not be used.
00:29:05.856 EAL: No free 2048 kB hugepages reported on node 1
00:29:05.856 [2024-04-26 15:09:51.431202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:29:05.856 [2024-04-26 15:09:51.462682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:05.856 [2024-04-26 15:09:51.549976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:06.114 15:09:51 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:06.114 15:09:51 -- common/autotest_common.sh@850 -- # return 0
00:29:06.114 15:09:51 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:06.114 15:09:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:06.372 15:09:51 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:06.372 15:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:06.372 15:09:51 -- common/autotest_common.sh@10 -- # set +x
00:29:06.372 15:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:06.372 15:09:51 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:06.372 15:09:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:06.629 nvme0n1
00:29:06.629 15:09:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:06.629 15:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:06.629 15:09:52 -- common/autotest_common.sh@10 -- # set +x
00:29:06.629 15:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:06.629 15:09:52 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:06.629 15:09:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:06.888 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:06.888 Zero copy mechanism will not be used.
00:29:06.888 Running I/O for 2 seconds...
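Condensed from the xtrace lines above, the setup for this second run (128 KiB randwrite, queue depth 16) is: enable per-command NVMe error counters and unlimited driver-level retries, attach the target over TCP with data digest (DDGST) on, arm crc32c corruption in the accel layer, then start the timed workload. A sketch of the equivalent manual sequence; all flags are verbatim from the trace, while the repository-relative paths and the assumption that the harness's rpc_cmd helper addresses the target application's default RPC socket are mine:

    # initiator (bdevperf) side: keep nvme_error counters, retry forever
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    # attach with data digest enabled so payload CRC32C is verified
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side (default RPC socket): arm crc32c corruption (-i 32 as traced)
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # kick off the 2-second workload configured at bdevperf startup
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces below as a data_crc32_calc_done error on the initiator followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, which bdev_nvme retries thanks to --bdev-retry-count -1; the stream of such record pairs follows.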
00:29:06.888 [2024-04-26 15:09:52.419965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.888 [2024-04-26 15:09:52.420447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.888 [2024-04-26 15:09:52.420484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.888 [2024-04-26 15:09:52.428581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.888 [2024-04-26 15:09:52.428908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.888 [2024-04-26 15:09:52.428937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.888 [2024-04-26 15:09:52.439319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.888 [2024-04-26 15:09:52.439685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.888 [2024-04-26 15:09:52.439714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.888 [2024-04-26 15:09:52.449720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.888 [2024-04-26 15:09:52.450069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.888 [2024-04-26 15:09:52.450099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.888 [2024-04-26 15:09:52.460513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.460882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.460910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.889 [2024-04-26 15:09:52.471682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.472029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.472059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.889 [2024-04-26 15:09:52.481934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.482271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.482300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.889 [2024-04-26 15:09:52.492190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.492343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.492371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.889 [2024-04-26 15:09:52.502876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.503240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.503269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.889 [2024-04-26 15:09:52.512862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.513227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.513256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.889 [2024-04-26 15:09:52.522530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.522871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.522909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.889 [2024-04-26 15:09:52.531830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.532178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.532207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.889 [2024-04-26 15:09:52.541489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.541823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.541850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.889 [2024-04-26 15:09:52.551037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:06.889 [2024-04-26 15:09:52.551374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.889 [2024-04-26 15:09:52.551401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:06.889 [2024-04-26 15:09:52.559418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90
00:29:06.889 [2024-04-26 15:09:52.559827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.889 [2024-04-26 15:09:52.559854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... dozens of identical record triplets omitted: each data PDU on tqpair=(0xf56500), pdu=0x2000190fef90 fails the data digest check, and the corresponding WRITE sqid:1 cid:15 (LBAs 5184, 21472, 13120, 14080, ...) is completed with TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
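What data_crc32_calc_done is validating here: NVMe/TCP can negotiate a CRC32C data digest (DDGST) that trails each data PDU, and the "Data digest error" fires when the digest recomputed over the received payload disagrees with the one on the wire. A minimal self-contained sketch of that CRC32C calculation (bitwise for clarity; SPDK's actual implementation is table-driven/accelerated):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the digest
 * algorithm NVMe/TCP uses for header (HDGST) and data (DDGST) digests.
 * Bitwise variant for clarity; production code uses lookup tables or
 * the SSE4.2 crc32 instruction. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* "123456789" is the standard CRC32C check vector. */
    const uint8_t vec[] = "123456789";
    printf("DDGST = 0x%08X (expected 0xE3069283)\n",
           crc32c(vec, sizeof(vec) - 1));
    /* The receiver recomputes this over each data PDU payload; a
     * mismatch with the trailing digest raises "Data digest error". */
    return 0;
}
```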
00:29:07.151 [2024-04-26 15:09:52.867058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90
00:29:07.151 [2024-04-26 15:09:52.867415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:07.151 [2024-04-26 15:09:52.867442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same error/command/completion triplet repeats for WRITEs at LBAs 23808, 21440, 672, 12864, 14112, 19040, 8960, ...; every data PDU on tqpair=(0xf56500) fails CRC32C validation and the command is failed with (00/22) dnr:0 ...]
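The "(00/22) ... p:0 m:0 dnr:0" suffix is the completion's status field: status code type 00h (generic) and status code 22h (Command Transient Transport Error), with DNR=0 marking the failure as retryable. A rough sketch of decoding those bits from CQE dword 3, mirroring how spdk_nvme_print_completion formats them (field offsets per the NVMe base spec; print_status below is a hypothetical helper, not an SPDK API):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the status half of NVMe CQE dword 3. Layout per the NVMe base
 * spec: P (bit 16), SC (bits 24:17), SCT (bits 27:25), M (bit 30),
 * DNR (bit 31). */
static void print_status(uint32_t cqe_dw3)
{
    unsigned p   = (cqe_dw3 >> 16) & 0x1;
    unsigned sc  = (cqe_dw3 >> 17) & 0xFF;
    unsigned sct = (cqe_dw3 >> 25) & 0x7;
    unsigned m   = (cqe_dw3 >> 30) & 0x1;
    unsigned dnr = (cqe_dw3 >> 31) & 0x1;

    printf("(%02x/%02x) p:%x m:%x dnr:%x\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 (generic), SC 0x22 (Command Transient Transport Error),
     * DNR 0 -- the "(00/22) ... dnr:0" seen throughout this log,
     * i.e. a retryable transport-level failure. */
    print_status(0x22u << 17);
    return 0;
}
```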
00:29:07.671 [2024-04-26 15:09:53.226864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90
00:29:07.671 [2024-04-26 15:09:53.227269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:07.671 [2024-04-26 15:09:53.227311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the run continues through 15:09:53.5xx with the same pattern (LBAs 6016, 10816, 448, 2656, 6528, 17824, ...); every WRITE on qid:1 is failed with a data digest error and a retryable (dnr:0) transient transport error ...]
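These digest checks only run because the connection negotiated DDGST during ICReq/ICResp setup. A minimal sketch of how a host opts in through SPDK's public API (assuming the header_digest/data_digest fields of struct spdk_nvme_ctrlr_opts; transport ID setup and spdk_nvme_connect are omitted):

```c
#include <stdbool.h>
#include "spdk/nvme.h"

/* Sketch: request HDGST/DDGST when building controller options for an
 * NVMe/TCP connection. With data_digest set, every received data PDU
 * is CRC32C-verified -- the check that is failing throughout this log. */
static void init_digest_opts(struct spdk_nvme_ctrlr_opts *opts)
{
    spdk_nvme_ctrlr_get_default_ctrlr_opts(opts, sizeof(*opts));
    opts->header_digest = true;  /* CRC32C over each PDU header */
    opts->data_digest = true;    /* CRC32C over each data PDU payload */
}
```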
[2024-04-26 15:09:53.589771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.589812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.597370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.597795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.597826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.606128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.606478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.606509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.614801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.615154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.615181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.623921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.624323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.624354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.631190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.631555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.631587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.638224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.638590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.638622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.645200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with 
pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.645557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.645588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.652683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.653043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.653085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.659977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.660387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.660428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.932 [2024-04-26 15:09:53.667534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:07.932 [2024-04-26 15:09:53.667870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.932 [2024-04-26 15:09:53.667903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.675720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.676160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.676195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.683649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.684079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.684108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.690934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.691263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.691292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.698203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.698629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.698670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.706525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.706874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.706905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.714644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.714994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.715035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.722299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.722656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.722687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.730661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.730992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.731034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.737862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.738199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.738227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.744595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.744924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.744955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.751849] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.752228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.752271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.759777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.760158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.760200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.768482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.768685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.768717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.777197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.777550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.191 [2024-04-26 15:09:53.777582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.191 [2024-04-26 15:09:53.784180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.191 [2024-04-26 15:09:53.784529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.784561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.790994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.791305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.791355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.798622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.798952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.798983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:08.192 [2024-04-26 15:09:53.806531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.806860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.806891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.815236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.815583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.815614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.823261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.823627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.823658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.830121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.830454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.830486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.837015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.837365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.837396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.843989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.844336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.844368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.852088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.852479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.852511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.859320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.859668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.859700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.865871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.866200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.866228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.872226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.872568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.872600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.878527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.878850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.878882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.884843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.885211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.885238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.891676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.892096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.892138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.899561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.899926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.899957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.906173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.906507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.906539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.912560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.912882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.912914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.919004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.919326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.919368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.192 [2024-04-26 15:09:53.925842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.192 [2024-04-26 15:09:53.926191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.192 [2024-04-26 15:09:53.926233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.932702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.933044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:53.933092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.940608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.940969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:53.941001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.947446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.947813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:53.947845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.955603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.955940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:53.955972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.962938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.963265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:53.963294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.969838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.970243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:53.970286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.976655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.976984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:53.977030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.983227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.983593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:53.983625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.990712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.991043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:53.991093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:53.997542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:53.997868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 
[2024-04-26 15:09:53.997900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.003912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.004235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.004262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.010297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.010639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.010671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.018253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.018698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.018730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.025458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.025887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.025918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.032517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.032928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.032960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.039500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.039910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.039941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.048219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.048615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.048647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.055745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.056094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.056121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.062250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.062603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.062634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.069142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.069485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.069517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.075870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.076203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.076230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.082517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.452 [2024-04-26 15:09:54.082845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.452 [2024-04-26 15:09:54.082877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.452 [2024-04-26 15:09:54.089530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.089839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.089867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.096095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.096442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.096474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.102805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.103149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.103178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.109473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.109839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.109870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.116360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.116671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.116698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.124417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.124730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.124756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.131277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.131684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.131725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.137644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.138052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.138081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.145583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.145786] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.145813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.153116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.153529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.153555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.162283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.162605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.162639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.171313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.171640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.171667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.180703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.180882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.180909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.453 [2024-04-26 15:09:54.189839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.453 [2024-04-26 15:09:54.190273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.453 [2024-04-26 15:09:54.190319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.712 [2024-04-26 15:09:54.198697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.712 [2024-04-26 15:09:54.199124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.712 [2024-04-26 15:09:54.199154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.712 [2024-04-26 15:09:54.206550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.712 [2024-04-26 15:09:54.206856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.712 [2024-04-26 15:09:54.206884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.712 [2024-04-26 15:09:54.213911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.712 [2024-04-26 15:09:54.214264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.712 [2024-04-26 15:09:54.214292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.712 [2024-04-26 15:09:54.221425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.221732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.221760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.228979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.229322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.229351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.236760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.237092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.237121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.244654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.244961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.244988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.252866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.253209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.253239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.260824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 
[2024-04-26 15:09:54.261157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.261185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.268054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.268500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.268526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.274854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.275202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.275231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.282451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.282777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.282810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.290646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.291087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.291124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.299664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.299874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.299908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.309219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.309538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.309564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.318255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) 
with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.318650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.318695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.326549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.326861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.326888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.333593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.333980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.334027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.341467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.341790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.341821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.348888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.349218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.349246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.356137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.356444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.356470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.362796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.363127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.363155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.369845] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.370267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.370317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.376789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.377118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.377146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.383503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.383804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.383831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.390494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.390916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.390942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.397796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.398248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.398280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.404849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.405182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.405210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.412245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.713 [2024-04-26 15:09:54.412585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.713 [2024-04-26 15:09:54.412612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.713 [2024-04-26 15:09:54.418729] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf56500) with pdu=0x2000190fef90 00:29:08.714 [2024-04-26 15:09:54.418866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.714 [2024-04-26 15:09:54.418893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.714 00:29:08.714 Latency(us) 00:29:08.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.714 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:08.714 nvme0n1 : 2.00 3978.94 497.37 0.00 0.00 4012.72 2354.44 11116.85 00:29:08.714 =================================================================================================================== 00:29:08.714 Total : 3978.94 497.37 0.00 0.00 4012.72 2354.44 11116.85 00:29:08.714 0 00:29:08.714 15:09:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:08.714 15:09:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:08.714 15:09:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:08.714 | .driver_specific 00:29:08.714 | .nvme_error 00:29:08.714 | .status_code 00:29:08.714 | .command_transient_transport_error' 00:29:08.714 15:09:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:08.972 15:09:54 -- host/digest.sh@71 -- # (( 257 > 0 )) 00:29:08.972 15:09:54 -- host/digest.sh@73 -- # killprocess 3900341 00:29:08.972 15:09:54 -- common/autotest_common.sh@936 -- # '[' -z 3900341 ']' 00:29:08.972 15:09:54 -- common/autotest_common.sh@940 -- # kill -0 3900341 00:29:08.972 15:09:54 -- common/autotest_common.sh@941 -- # uname 00:29:08.972 15:09:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:08.972 15:09:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3900341 00:29:08.972 15:09:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:08.972 15:09:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:08.972 15:09:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3900341' 00:29:08.972 killing process with pid 3900341 00:29:08.972 15:09:54 -- common/autotest_common.sh@955 -- # kill 3900341 00:29:08.972 Received shutdown signal, test time was about 2.000000 seconds 00:29:08.972 00:29:08.972 Latency(us) 00:29:08.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.972 =================================================================================================================== 00:29:08.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.972 15:09:54 -- common/autotest_common.sh@960 -- # wait 3900341 00:29:09.229 15:09:54 -- host/digest.sh@116 -- # killprocess 3898974 00:29:09.229 15:09:54 -- common/autotest_common.sh@936 -- # '[' -z 3898974 ']' 00:29:09.229 15:09:54 -- common/autotest_common.sh@940 -- # kill -0 3898974 00:29:09.229 15:09:54 -- common/autotest_common.sh@941 -- # uname 00:29:09.229 15:09:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:09.229 15:09:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3898974 00:29:09.229 15:09:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:09.229 15:09:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:09.229 15:09:54 -- 
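The trace above is how digest.sh turns the flood of injected digest errors into a pass/fail signal: it queries the running bdevperf app over its RPC socket for per-bdev iostat and pulls the transient-transport-error counter out of the NVMe error statistics. A minimal standalone sketch of the same query (assumptions: rootdir points at the SPDK checkout, the RPC socket is /var/tmp/bperf.sock as in this run, and jq is on PATH):

get_transient_errcount() {
    # Fetch iostat for one bdev from bdevperf's RPC server and extract the
    # count of commands that completed with TRANSIENT TRANSPORT ERROR.
    local bdev=$1
    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)   # this run returned 257
(( errcount > 0 ))                           # the test passes only if errors were actually observed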
common/autotest_common.sh@954 -- # echo 'killing process with pid 3898974' 00:29:09.229 killing process with pid 3898974 00:29:09.229 15:09:54 -- common/autotest_common.sh@955 -- # kill 3898974 00:29:09.229 15:09:54 -- common/autotest_common.sh@960 -- # wait 3898974 00:29:09.488 00:29:09.488 real 0m15.170s 00:29:09.488 user 0m29.456s 00:29:09.488 sys 0m4.864s 00:29:09.488 15:09:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:09.488 15:09:55 -- common/autotest_common.sh@10 -- # set +x 00:29:09.488 ************************************ 00:29:09.488 END TEST nvmf_digest_error 00:29:09.488 ************************************ 00:29:09.488 15:09:55 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:09.488 15:09:55 -- host/digest.sh@150 -- # nvmftestfini 00:29:09.488 15:09:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:09.488 15:09:55 -- nvmf/common.sh@117 -- # sync 00:29:09.488 15:09:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:09.488 15:09:55 -- nvmf/common.sh@120 -- # set +e 00:29:09.488 15:09:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:09.488 15:09:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:09.488 rmmod nvme_tcp 00:29:09.488 rmmod nvme_fabrics 00:29:09.747 rmmod nvme_keyring 00:29:09.747 15:09:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.747 15:09:55 -- nvmf/common.sh@124 -- # set -e 00:29:09.747 15:09:55 -- nvmf/common.sh@125 -- # return 0 00:29:09.747 15:09:55 -- nvmf/common.sh@478 -- # '[' -n 3898974 ']' 00:29:09.747 15:09:55 -- nvmf/common.sh@479 -- # killprocess 3898974 00:29:09.747 15:09:55 -- common/autotest_common.sh@936 -- # '[' -z 3898974 ']' 00:29:09.747 15:09:55 -- common/autotest_common.sh@940 -- # kill -0 3898974 00:29:09.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3898974) - No such process 00:29:09.747 15:09:55 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3898974 is not found' 00:29:09.747 Process with pid 3898974 is not found 00:29:09.747 15:09:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:09.747 15:09:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:09.747 15:09:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:09.747 15:09:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:09.747 15:09:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:09.747 15:09:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.747 15:09:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.747 15:09:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.647 15:09:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:11.647 00:29:11.647 real 0m34.963s 00:29:11.647 user 0m59.862s 00:29:11.647 sys 0m11.235s 00:29:11.647 15:09:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:11.647 15:09:57 -- common/autotest_common.sh@10 -- # set +x 00:29:11.647 ************************************ 00:29:11.647 END TEST nvmf_digest 00:29:11.647 ************************************ 00:29:11.647 15:09:57 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:29:11.647 15:09:57 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:29:11.647 15:09:57 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:29:11.647 15:09:57 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:11.647 15:09:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:11.647 15:09:57 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:29:11.647 15:09:57 -- common/autotest_common.sh@10 -- # set +x 00:29:11.906 ************************************ 00:29:11.906 START TEST nvmf_bdevperf 00:29:11.906 ************************************ 00:29:11.906 15:09:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:11.906 * Looking for test storage... 00:29:11.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.906 15:09:57 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.906 15:09:57 -- nvmf/common.sh@7 -- # uname -s 00:29:11.906 15:09:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.906 15:09:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.906 15:09:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.906 15:09:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.906 15:09:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.906 15:09:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.906 15:09:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.906 15:09:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.906 15:09:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.906 15:09:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.906 15:09:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:11.906 15:09:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:11.906 15:09:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.906 15:09:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.906 15:09:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.906 15:09:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.906 15:09:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.906 15:09:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.906 15:09:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.906 15:09:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.906 15:09:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.906 15:09:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.906 15:09:57 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.906 15:09:57 -- paths/export.sh@5 -- # export PATH 00:29:11.906 15:09:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.906 15:09:57 -- nvmf/common.sh@47 -- # : 0 00:29:11.906 15:09:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.906 15:09:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.906 15:09:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.906 15:09:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.906 15:09:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.906 15:09:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.906 15:09:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.906 15:09:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.906 15:09:57 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:11.906 15:09:57 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:11.906 15:09:57 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:11.906 15:09:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:11.906 15:09:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.906 15:09:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:11.906 15:09:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:11.906 15:09:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:11.906 15:09:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.906 15:09:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:11.906 15:09:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.906 15:09:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:11.906 15:09:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:11.906 15:09:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:11.906 15:09:57 -- common/autotest_common.sh@10 -- # set +x 00:29:13.810 15:09:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:13.810 15:09:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:13.810 15:09:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:13.810 15:09:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:13.810 15:09:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:13.810 15:09:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:13.810 15:09:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:13.810 15:09:59 -- nvmf/common.sh@295 -- # net_devs=() 00:29:13.810 15:09:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:13.810 15:09:59 
-- nvmf/common.sh@296 -- # e810=() 00:29:13.810 15:09:59 -- nvmf/common.sh@296 -- # local -ga e810 00:29:13.810 15:09:59 -- nvmf/common.sh@297 -- # x722=() 00:29:13.810 15:09:59 -- nvmf/common.sh@297 -- # local -ga x722 00:29:13.810 15:09:59 -- nvmf/common.sh@298 -- # mlx=() 00:29:13.810 15:09:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:13.810 15:09:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.810 15:09:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:13.810 15:09:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:13.810 15:09:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:13.810 15:09:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:13.810 15:09:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:13.811 15:09:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:13.811 15:09:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:13.811 15:09:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:13.811 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:13.811 15:09:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:13.811 15:09:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:13.811 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:13.811 15:09:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:13.811 15:09:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:13.811 15:09:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.811 15:09:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:13.811 15:09:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.811 15:09:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:29:13.811 Found net devices under 0000:84:00.0: cvl_0_0 00:29:13.811 15:09:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.811 15:09:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:13.811 15:09:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.811 15:09:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:13.811 15:09:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.811 15:09:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:13.811 Found net devices under 0000:84:00.1: cvl_0_1 00:29:13.811 15:09:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.811 15:09:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:13.811 15:09:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:13.811 15:09:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:13.811 15:09:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:13.811 15:09:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.811 15:09:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.811 15:09:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.811 15:09:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:13.811 15:09:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.811 15:09:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.811 15:09:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:13.811 15:09:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.811 15:09:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.811 15:09:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:13.811 15:09:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:13.811 15:09:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.811 15:09:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.070 15:09:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.070 15:09:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.070 15:09:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:14.070 15:09:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.070 15:09:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.070 15:09:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.070 15:09:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:14.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:29:14.070 00:29:14.070 --- 10.0.0.2 ping statistics --- 00:29:14.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.070 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:29:14.070 15:09:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:14.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:29:14.070 00:29:14.070 --- 10.0.0.1 ping statistics --- 00:29:14.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.070 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:29:14.070 15:09:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.070 15:09:59 -- nvmf/common.sh@411 -- # return 0 00:29:14.070 15:09:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:14.070 15:09:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.070 15:09:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:14.070 15:09:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:14.070 15:09:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.070 15:09:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:14.070 15:09:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:14.070 15:09:59 -- host/bdevperf.sh@25 -- # tgt_init 00:29:14.070 15:09:59 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:14.070 15:09:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:14.070 15:09:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:14.070 15:09:59 -- common/autotest_common.sh@10 -- # set +x 00:29:14.070 15:09:59 -- nvmf/common.sh@470 -- # nvmfpid=3902830 00:29:14.070 15:09:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:14.070 15:09:59 -- nvmf/common.sh@471 -- # waitforlisten 3902830 00:29:14.070 15:09:59 -- common/autotest_common.sh@817 -- # '[' -z 3902830 ']' 00:29:14.070 15:09:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.070 15:09:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:14.070 15:09:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.070 15:09:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:14.070 15:09:59 -- common/autotest_common.sh@10 -- # set +x 00:29:14.070 [2024-04-26 15:09:59.706873] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:29:14.070 [2024-04-26 15:09:59.706973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.070 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.071 [2024-04-26 15:09:59.746552] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:14.071 [2024-04-26 15:09:59.773422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:14.329 [2024-04-26 15:09:59.863698] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.329 [2024-04-26 15:09:59.863773] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.329 [2024-04-26 15:09:59.863802] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.329 [2024-04-26 15:09:59.863813] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:14.329 [2024-04-26 15:09:59.863823] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.329 [2024-04-26 15:09:59.863957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.329 [2024-04-26 15:09:59.864029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:14.329 [2024-04-26 15:09:59.864030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.329 15:09:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:14.329 15:09:59 -- common/autotest_common.sh@850 -- # return 0 00:29:14.329 15:09:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:14.329 15:09:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:14.329 15:09:59 -- common/autotest_common.sh@10 -- # set +x 00:29:14.329 15:10:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.329 15:10:00 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:14.329 15:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.329 15:10:00 -- common/autotest_common.sh@10 -- # set +x 00:29:14.329 [2024-04-26 15:10:00.009569] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.329 15:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.329 15:10:00 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:14.329 15:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.329 15:10:00 -- common/autotest_common.sh@10 -- # set +x 00:29:14.329 Malloc0 00:29:14.329 15:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.329 15:10:00 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:14.329 15:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.329 15:10:00 -- common/autotest_common.sh@10 -- # set +x 00:29:14.329 15:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.329 15:10:00 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.329 15:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.329 15:10:00 -- common/autotest_common.sh@10 -- # set +x 00:29:14.587 15:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.587 15:10:00 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.587 15:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.587 15:10:00 -- common/autotest_common.sh@10 -- # set +x 00:29:14.587 [2024-04-26 15:10:00.074340] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.587 15:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.587 15:10:00 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:14.587 15:10:00 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:14.587 15:10:00 -- nvmf/common.sh@521 -- # config=() 00:29:14.587 15:10:00 -- nvmf/common.sh@521 -- # local subsystem config 00:29:14.587 15:10:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:14.587 15:10:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:14.587 { 00:29:14.587 "params": { 00:29:14.587 "name": "Nvme$subsystem", 00:29:14.587 "trtype": "$TEST_TRANSPORT", 00:29:14.587 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:29:14.587 "adrfam": "ipv4", 00:29:14.587 "trsvcid": "$NVMF_PORT", 00:29:14.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.587 "hdgst": ${hdgst:-false}, 00:29:14.587 "ddgst": ${ddgst:-false} 00:29:14.587 }, 00:29:14.587 "method": "bdev_nvme_attach_controller" 00:29:14.587 } 00:29:14.587 EOF 00:29:14.587 )") 00:29:14.587 15:10:00 -- nvmf/common.sh@543 -- # cat 00:29:14.587 15:10:00 -- nvmf/common.sh@545 -- # jq . 00:29:14.587 15:10:00 -- nvmf/common.sh@546 -- # IFS=, 00:29:14.587 15:10:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:14.587 "params": { 00:29:14.587 "name": "Nvme1", 00:29:14.587 "trtype": "tcp", 00:29:14.587 "traddr": "10.0.0.2", 00:29:14.587 "adrfam": "ipv4", 00:29:14.587 "trsvcid": "4420", 00:29:14.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:14.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:14.588 "hdgst": false, 00:29:14.588 "ddgst": false 00:29:14.588 }, 00:29:14.588 "method": "bdev_nvme_attach_controller" 00:29:14.588 }' 00:29:14.588 [2024-04-26 15:10:00.122707] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:29:14.588 [2024-04-26 15:10:00.122775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902861 ] 00:29:14.588 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.588 [2024-04-26 15:10:00.154796] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:14.588 [2024-04-26 15:10:00.184423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.588 [2024-04-26 15:10:00.276506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.846 Running I/O for 1 seconds... 
00:29:15.780 00:29:15.780 Latency(us) 00:29:15.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.780 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:15.780 Verification LBA range: start 0x0 length 0x4000 00:29:15.780 Nvme1n1 : 1.01 8454.71 33.03 0.00 0.00 15078.78 1395.67 15922.82 00:29:15.780 =================================================================================================================== 00:29:15.780 Total : 8454.71 33.03 0.00 0.00 15078.78 1395.67 15922.82 00:29:16.039 15:10:01 -- host/bdevperf.sh@30 -- # bdevperfpid=3903000 00:29:16.039 15:10:01 -- host/bdevperf.sh@32 -- # sleep 3 00:29:16.039 15:10:01 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:16.039 15:10:01 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:16.039 15:10:01 -- nvmf/common.sh@521 -- # config=() 00:29:16.039 15:10:01 -- nvmf/common.sh@521 -- # local subsystem config 00:29:16.039 15:10:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:16.039 15:10:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:16.039 { 00:29:16.039 "params": { 00:29:16.039 "name": "Nvme$subsystem", 00:29:16.039 "trtype": "$TEST_TRANSPORT", 00:29:16.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.039 "adrfam": "ipv4", 00:29:16.039 "trsvcid": "$NVMF_PORT", 00:29:16.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.039 "hdgst": ${hdgst:-false}, 00:29:16.039 "ddgst": ${ddgst:-false} 00:29:16.039 }, 00:29:16.039 "method": "bdev_nvme_attach_controller" 00:29:16.039 } 00:29:16.039 EOF 00:29:16.039 )") 00:29:16.039 15:10:01 -- nvmf/common.sh@543 -- # cat 00:29:16.039 15:10:01 -- nvmf/common.sh@545 -- # jq . 00:29:16.039 15:10:01 -- nvmf/common.sh@546 -- # IFS=, 00:29:16.039 15:10:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:16.039 "params": { 00:29:16.039 "name": "Nvme1", 00:29:16.039 "trtype": "tcp", 00:29:16.039 "traddr": "10.0.0.2", 00:29:16.039 "adrfam": "ipv4", 00:29:16.039 "trsvcid": "4420", 00:29:16.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:16.039 "hdgst": false, 00:29:16.039 "ddgst": false 00:29:16.039 }, 00:29:16.039 "method": "bdev_nvme_attach_controller" 00:29:16.039 }' 00:29:16.039 [2024-04-26 15:10:01.741667] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:29:16.039 [2024-04-26 15:10:01.741749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903000 ] 00:29:16.039 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.039 [2024-04-26 15:10:01.776177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:16.298 [2024-04-26 15:10:01.807343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.298 [2024-04-26 15:10:01.892516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.558 Running I/O for 15 seconds... 
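The 15-second run below is the failover case: host/bdevperf.sh hard-kills the nvmf target (kill -9 3902830) a few seconds in, and the long stream of ABORTED - SQ DELETION completions that follows is every in-flight write being failed back to bdevperf. A distilled sketch of that injection pattern, reusing the cfg helper from the previous sketch; the -f flag's keep-running-on-failure behavior is inferred from its use in this test, and $nvmfpid stands for the target pid recorded by nvmfappstart above (3902830 in this run).

# Sketch only: fault injection against a live verify workload.
./build/examples/bdevperf --json <(cfg) -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!
sleep 3                 # let the verify workload get IOs in flight
kill -9 "$nvmfpid"      # hard-kill the target mid-run, as at host/bdevperf.sh@33
sleep 3                 # pending writes drain as ABORTED - SQ DELETION completions
wait "$bdevperf_pid"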
00:29:19.094 15:10:04 -- host/bdevperf.sh@33 -- # kill -9 3902830 00:29:19.094 15:10:04 -- host/bdevperf.sh@35 -- # sleep 3 00:29:19.094 [2024-04-26 15:10:04.709164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.709977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.709994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 
15:10:04.710270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.094 [2024-04-26 15:10:04.710668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.094 [2024-04-26 15:10:04.710686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.710718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.710750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.710783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.710815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.710847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.710879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.710912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.710944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:57 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.710976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.710992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.711009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.711037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.711056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.711087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.711103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.711117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.711132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.711146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.711161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.711175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.711191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.711205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.711220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.711235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.711250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.095 [2024-04-26 15:10:04.711264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.095 [2024-04-26 15:10:04.711279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50640 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:29:19.095 [2024-04-26 15:10:04.711293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.095 [2024-04-26 15:10:04.711326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:19.095 [2024-04-26 15:10:04.711342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.095 [2024-04-26 15:10:04.711918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.095 [2024-04-26 15:10:04.711933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical command/completion pairs repeat for the rest of the queued I/O on qid:1: WRITE lba:50656 through lba:51032 and READ lba:50024 through lba:50144, every one completed as ABORTED - SQ DELETION (00/08)]
00:29:19.096 [2024-04-26 15:10:04.713544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d910 is same with the state(5) to be set
00:29:19.096 [2024-04-26 15:10:04.713562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:19.096 [2024-04-26 15:10:04.713575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:19.096 [2024-04-26 15:10:04.713592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50152 len:8 PRP1 0x0 PRP2 0x0
00:29:19.096 [2024-04-26 15:10:04.713607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.096 [2024-04-26 15:10:04.713672] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e5d910 was disconnected and freed. reset controller.
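The "(00/08)" pair in every aborted completion above is the NVMe status: Status Code Type 0x0 (generic command status) and Status Code 0x08 (command aborted due to SQ deletion), which is what in-flight I/O reports when its submission queue is torn down for a controller reset. Below is a minimal C sketch of that decoding, assuming only the NVMe completion status-field layout; it is an illustration of the same sct/sc split, not SPDK's spdk_nvme_print_completion.

/* Decode the 16-bit Status Field of an NVMe completion queue entry.
 * Per the NVMe spec: bit 0 is the phase tag, bits 8:1 the Status Code (SC),
 * bits 11:9 the Status Code Type (SCT). SCT 0x0 / SC 0x08 is
 * "Command Aborted due to SQ Deletion" -- printed as (00/08) above. */
#include <stdint.h>
#include <stdio.h>

struct cpl_status { uint8_t sct; uint8_t sc; };

static struct cpl_status decode_status(uint16_t status_raw)
{
    struct cpl_status s;
    s.sc  = (status_raw >> 1) & 0xff;
    s.sct = (status_raw >> 9) & 0x7;
    return s;
}

int main(void)
{
    /* raw 0x0010: phase 0, SC 0x08, SCT 0x0 -- the case in this log */
    struct cpl_status s = decode_status(0x0010);
    printf("sct=0x%02x sc=0x%02x -> %s\n", s.sct, s.sc,
           (s.sct == 0x0 && s.sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
    return 0;
}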
00:29:19.096 [2024-04-26 15:10:04.713751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:19.096 [2024-04-26 15:10:04.713775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.096 [2024-04-26 15:10:04.713792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:19.096 [2024-04-26 15:10:04.713807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.096 [2024-04-26 15:10:04.713829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:19.096 [2024-04-26 15:10:04.713844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.096 [2024-04-26 15:10:04.713860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:19.096 [2024-04-26 15:10:04.713875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.096 [2024-04-26 15:10:04.713889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set
00:29:19.096 [2024-04-26 15:10:04.717610] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.096 [2024-04-26 15:10:04.717652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor
00:29:19.096 [2024-04-26 15:10:04.718373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.096 [2024-04-26 15:10:04.718531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.096 [2024-04-26 15:10:04.718559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420
00:29:19.096 [2024-04-26 15:10:04.718577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set
00:29:19.096 [2024-04-26 15:10:04.718815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor
00:29:19.096 [2024-04-26 15:10:04.719081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.096 [2024-04-26 15:10:04.719104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.096 [2024-04-26 15:10:04.719120] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.096 [2024-04-26 15:10:04.722702] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
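errno = 111 in the posix_sock_create records is ECONNREFUSED on Linux: nothing is listening at the target address 10.0.0.2:4420 (the NVMe/TCP well-known port) while the subsystem is being torn down, so each reconnect attempt is refused at the TCP level. A minimal standalone sketch of the same failure mode follows, using plain POSIX sockets with the address and port taken from the log lines above; it is not SPDK's posix_sock_create.

/* Plain blocking connect() to the target from the log. With no listener
 * on 10.0.0.2:4420, connect() fails and errno is 111 (ECONNREFUSED),
 * matching the repeated "connect() failed, errno = 111" records. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));        /* 111 = ECONNREFUSED */
    close(fd);
    return 0;
}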
00:29:19.096 [2024-04-26 15:10:04.731734] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.096 [2024-04-26 15:10:04.732156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.096 [2024-04-26 15:10:04.732326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.096 [2024-04-26 15:10:04.732355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420
00:29:19.096 [2024-04-26 15:10:04.732372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set
00:29:19.096 [2024-04-26 15:10:04.732609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor
00:29:19.096 [2024-04-26 15:10:04.732856] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.097 [2024-04-26 15:10:04.732881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.097 [2024-04-26 15:10:04.732896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.097 [2024-04-26 15:10:04.736400] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.097 [2024-04-26 15:10:04.745598] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.097 [2024-04-26 15:10:04.746011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.097 [2024-04-26 15:10:04.746195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.097 [2024-04-26 15:10:04.746225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420
00:29:19.097 [2024-04-26 15:10:04.746242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set
00:29:19.097 [2024-04-26 15:10:04.746479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor
00:29:19.097 [2024-04-26 15:10:04.746720] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.097 [2024-04-26 15:10:04.746744] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.097 [2024-04-26 15:10:04.746759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.097 [2024-04-26 15:10:04.750319] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[this reset cycle repeats back-to-back from 15:10:04.759492 through 15:10:05.139582, only the timestamps advancing: resetting controller; connect() failed, errno = 111 (twice); sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420; Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor; Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.]
00:29:19.665 [2024-04-26 15:10:05.148851] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.665 [2024-04-26 15:10:05.149283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.665 [2024-04-26 15:10:05.149406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.665 [2024-04-26 15:10:05.149430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.665 [2024-04-26 15:10:05.149459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.665 [2024-04-26 15:10:05.149659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.665 [2024-04-26 15:10:05.149905] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.665 [2024-04-26 15:10:05.149929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.665 [2024-04-26 15:10:05.149944] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.665 [2024-04-26 15:10:05.153452] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.665 [2024-04-26 15:10:05.162580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.665 [2024-04-26 15:10:05.162962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.665 [2024-04-26 15:10:05.163136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.665 [2024-04-26 15:10:05.163163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.665 [2024-04-26 15:10:05.163178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.665 [2024-04-26 15:10:05.163418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.163659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.163687] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.163703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.666 [2024-04-26 15:10:05.167283] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.666 [2024-04-26 15:10:05.176573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.666 [2024-04-26 15:10:05.176965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.177127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.177153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.666 [2024-04-26 15:10:05.177169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.666 [2024-04-26 15:10:05.177413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.177655] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.177678] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.177694] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.666 [2024-04-26 15:10:05.181254] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.666 [2024-04-26 15:10:05.190483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.666 [2024-04-26 15:10:05.190888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.191039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.191082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.666 [2024-04-26 15:10:05.191098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.666 [2024-04-26 15:10:05.191329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.191570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.191593] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.191608] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.666 [2024-04-26 15:10:05.195335] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.666 [2024-04-26 15:10:05.204472] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.666 [2024-04-26 15:10:05.204880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.205040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.205070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.666 [2024-04-26 15:10:05.205088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.666 [2024-04-26 15:10:05.205325] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.205566] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.205590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.205611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.666 [2024-04-26 15:10:05.209162] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.666 [2024-04-26 15:10:05.218374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.666 [2024-04-26 15:10:05.218799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.219005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.219041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.666 [2024-04-26 15:10:05.219059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.666 [2024-04-26 15:10:05.219295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.219537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.219560] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.219575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.666 [2024-04-26 15:10:05.223193] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.666 [2024-04-26 15:10:05.232282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.666 [2024-04-26 15:10:05.232720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.232915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.232944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.666 [2024-04-26 15:10:05.232961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.666 [2024-04-26 15:10:05.233207] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.233454] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.233479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.233494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.666 [2024-04-26 15:10:05.237086] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.666 [2024-04-26 15:10:05.246135] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.666 [2024-04-26 15:10:05.246546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.246726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.246774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.666 [2024-04-26 15:10:05.246792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.666 [2024-04-26 15:10:05.247037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.247265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.247286] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.247316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.666 [2024-04-26 15:10:05.250802] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.666 [2024-04-26 15:10:05.260113] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.666 [2024-04-26 15:10:05.260514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.260684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.260733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.666 [2024-04-26 15:10:05.260750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.666 [2024-04-26 15:10:05.260987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.261225] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.261246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.261259] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.666 [2024-04-26 15:10:05.264771] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.666 [2024-04-26 15:10:05.273937] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.666 [2024-04-26 15:10:05.274360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.274555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.274603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.666 [2024-04-26 15:10:05.274620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.666 [2024-04-26 15:10:05.274856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.275107] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.275132] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.275147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.666 [2024-04-26 15:10:05.278684] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.666 [2024-04-26 15:10:05.287880] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.666 [2024-04-26 15:10:05.288267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.288452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.666 [2024-04-26 15:10:05.288503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.666 [2024-04-26 15:10:05.288520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.666 [2024-04-26 15:10:05.288756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.666 [2024-04-26 15:10:05.288998] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.666 [2024-04-26 15:10:05.289029] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.666 [2024-04-26 15:10:05.289046] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.667 [2024-04-26 15:10:05.292591] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.667 [2024-04-26 15:10:05.301792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.667 [2024-04-26 15:10:05.302230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.302437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.302487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.667 [2024-04-26 15:10:05.302504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.667 [2024-04-26 15:10:05.302741] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.667 [2024-04-26 15:10:05.302981] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.667 [2024-04-26 15:10:05.303004] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.667 [2024-04-26 15:10:05.303028] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.667 [2024-04-26 15:10:05.306573] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.667 [2024-04-26 15:10:05.315769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.667 [2024-04-26 15:10:05.316187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.316354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.316407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.667 [2024-04-26 15:10:05.316424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.667 [2024-04-26 15:10:05.316661] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.667 [2024-04-26 15:10:05.316901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.667 [2024-04-26 15:10:05.316924] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.667 [2024-04-26 15:10:05.316940] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.667 [2024-04-26 15:10:05.320490] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.667 [2024-04-26 15:10:05.329710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.667 [2024-04-26 15:10:05.330127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.330342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.330392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.667 [2024-04-26 15:10:05.330409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.667 [2024-04-26 15:10:05.330646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.667 [2024-04-26 15:10:05.330886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.667 [2024-04-26 15:10:05.330909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.667 [2024-04-26 15:10:05.330924] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.667 [2024-04-26 15:10:05.334476] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.667 [2024-04-26 15:10:05.343670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.667 [2024-04-26 15:10:05.344080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.344237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.344266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.667 [2024-04-26 15:10:05.344283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.667 [2024-04-26 15:10:05.344521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.667 [2024-04-26 15:10:05.344762] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.667 [2024-04-26 15:10:05.344785] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.667 [2024-04-26 15:10:05.344800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.667 [2024-04-26 15:10:05.348352] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.667 [2024-04-26 15:10:05.357554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.667 [2024-04-26 15:10:05.357910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.358100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.358158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.667 [2024-04-26 15:10:05.358175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.667 [2024-04-26 15:10:05.358412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.667 [2024-04-26 15:10:05.358654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.667 [2024-04-26 15:10:05.358678] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.667 [2024-04-26 15:10:05.358693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.667 [2024-04-26 15:10:05.362243] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.667 [2024-04-26 15:10:05.371443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.667 [2024-04-26 15:10:05.371855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.371974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.372002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.667 [2024-04-26 15:10:05.372028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.667 [2024-04-26 15:10:05.372267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.667 [2024-04-26 15:10:05.372508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.667 [2024-04-26 15:10:05.372531] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.667 [2024-04-26 15:10:05.372547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.667 [2024-04-26 15:10:05.376096] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.667 [2024-04-26 15:10:05.385291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.667 [2024-04-26 15:10:05.385692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.385848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.385877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.667 [2024-04-26 15:10:05.385894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.667 [2024-04-26 15:10:05.386141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.667 [2024-04-26 15:10:05.386383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.667 [2024-04-26 15:10:05.386406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.667 [2024-04-26 15:10:05.386421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.667 [2024-04-26 15:10:05.389962] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.667 [2024-04-26 15:10:05.399166] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.667 [2024-04-26 15:10:05.399547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.399734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.667 [2024-04-26 15:10:05.399785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.667 [2024-04-26 15:10:05.399801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.667 [2024-04-26 15:10:05.400049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.667 [2024-04-26 15:10:05.400291] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.667 [2024-04-26 15:10:05.400314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.667 [2024-04-26 15:10:05.400329] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.667 [2024-04-26 15:10:05.403869] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.926 [2024-04-26 15:10:05.413073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.926 [2024-04-26 15:10:05.413488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.926 [2024-04-26 15:10:05.413648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.926 [2024-04-26 15:10:05.413697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.926 [2024-04-26 15:10:05.413714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.926 [2024-04-26 15:10:05.413951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.926 [2024-04-26 15:10:05.414202] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.926 [2024-04-26 15:10:05.414227] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.926 [2024-04-26 15:10:05.414242] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.926 [2024-04-26 15:10:05.417784] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.926 [2024-04-26 15:10:05.426990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.926 [2024-04-26 15:10:05.427381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.926 [2024-04-26 15:10:05.427528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.926 [2024-04-26 15:10:05.427556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.926 [2024-04-26 15:10:05.427581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.926 [2024-04-26 15:10:05.427818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.926 [2024-04-26 15:10:05.428068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.926 [2024-04-26 15:10:05.428093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.926 [2024-04-26 15:10:05.428108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.926 [2024-04-26 15:10:05.431648] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.926 [2024-04-26 15:10:05.440851] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.926 [2024-04-26 15:10:05.441241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.926 [2024-04-26 15:10:05.441387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.926 [2024-04-26 15:10:05.441415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.926 [2024-04-26 15:10:05.441432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.926 [2024-04-26 15:10:05.441669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.926 [2024-04-26 15:10:05.441910] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.926 [2024-04-26 15:10:05.441934] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.926 [2024-04-26 15:10:05.441949] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.926 [2024-04-26 15:10:05.445500] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.926 [2024-04-26 15:10:05.454703] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.926 [2024-04-26 15:10:05.455189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.455300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.455328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.455346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.455583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.927 [2024-04-26 15:10:05.455824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.927 [2024-04-26 15:10:05.455847] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.927 [2024-04-26 15:10:05.455862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.927 [2024-04-26 15:10:05.459409] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.927 [2024-04-26 15:10:05.468617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.927 [2024-04-26 15:10:05.469069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.469203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.469231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.469249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.469493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.927 [2024-04-26 15:10:05.469745] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.927 [2024-04-26 15:10:05.469768] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.927 [2024-04-26 15:10:05.469783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.927 [2024-04-26 15:10:05.473332] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.927 [2024-04-26 15:10:05.482571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.927 [2024-04-26 15:10:05.482965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.483127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.483157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.483174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.483410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.927 [2024-04-26 15:10:05.483652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.927 [2024-04-26 15:10:05.483675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.927 [2024-04-26 15:10:05.483690] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.927 [2024-04-26 15:10:05.487239] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.927 [2024-04-26 15:10:05.496444] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.927 [2024-04-26 15:10:05.496901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.497051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.497081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.497098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.497335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.927 [2024-04-26 15:10:05.497576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.927 [2024-04-26 15:10:05.497600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.927 [2024-04-26 15:10:05.497615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.927 [2024-04-26 15:10:05.501166] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.927 [2024-04-26 15:10:05.510366] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.927 [2024-04-26 15:10:05.510838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.511033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.511062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.511080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.511316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.927 [2024-04-26 15:10:05.511563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.927 [2024-04-26 15:10:05.511587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.927 [2024-04-26 15:10:05.511603] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.927 [2024-04-26 15:10:05.515150] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.927 [2024-04-26 15:10:05.524352] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.927 [2024-04-26 15:10:05.524791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.524964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.525001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.525027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.525267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.927 [2024-04-26 15:10:05.525513] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.927 [2024-04-26 15:10:05.525537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.927 [2024-04-26 15:10:05.525552] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.927 [2024-04-26 15:10:05.529108] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.927 [2024-04-26 15:10:05.538315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.927 [2024-04-26 15:10:05.538772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.538928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.538956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.538973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.539219] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.927 [2024-04-26 15:10:05.539460] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.927 [2024-04-26 15:10:05.539484] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.927 [2024-04-26 15:10:05.539499] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.927 [2024-04-26 15:10:05.543058] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.927 [2024-04-26 15:10:05.552284] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.927 [2024-04-26 15:10:05.552730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.552863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.552912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.552929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.553175] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.927 [2024-04-26 15:10:05.553416] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.927 [2024-04-26 15:10:05.553445] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.927 [2024-04-26 15:10:05.553461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.927 [2024-04-26 15:10:05.557010] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.927 [2024-04-26 15:10:05.566228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.927 [2024-04-26 15:10:05.566678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.566846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.566874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.566891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.567138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.927 [2024-04-26 15:10:05.567380] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.927 [2024-04-26 15:10:05.567403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.927 [2024-04-26 15:10:05.567418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.927 [2024-04-26 15:10:05.570958] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.927 [2024-04-26 15:10:05.580185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.927 [2024-04-26 15:10:05.580666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.580807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.927 [2024-04-26 15:10:05.580836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.927 [2024-04-26 15:10:05.580853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.927 [2024-04-26 15:10:05.581107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.928 [2024-04-26 15:10:05.581349] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.928 [2024-04-26 15:10:05.581372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.928 [2024-04-26 15:10:05.581387] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.928 [2024-04-26 15:10:05.584927] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.928 [2024-04-26 15:10:05.594132] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.928 [2024-04-26 15:10:05.594551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.928 [2024-04-26 15:10:05.594727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.928 [2024-04-26 15:10:05.594779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.928 [2024-04-26 15:10:05.594796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.928 [2024-04-26 15:10:05.595048] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.928 [2024-04-26 15:10:05.595290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.928 [2024-04-26 15:10:05.595313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.928 [2024-04-26 15:10:05.595333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.928 [2024-04-26 15:10:05.598874] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.928 [2024-04-26 15:10:05.608079] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.928 [2024-04-26 15:10:05.608458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.928 [2024-04-26 15:10:05.608660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.928 [2024-04-26 15:10:05.608711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:19.928 [2024-04-26 15:10:05.608728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:19.928 [2024-04-26 15:10:05.608965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:19.928 [2024-04-26 15:10:05.609216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.928 [2024-04-26 15:10:05.609241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.928 [2024-04-26 15:10:05.609256] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.928 [2024-04-26 15:10:05.612795] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.708 [2024-04-26 15:10:06.261619] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.708 [2024-04-26 15:10:06.262027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.262186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.262215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.708 [2024-04-26 15:10:06.262232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.708 [2024-04-26 15:10:06.262468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.708 [2024-04-26 15:10:06.262709] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.708 [2024-04-26 15:10:06.262732] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.708 [2024-04-26 15:10:06.262747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.708 [2024-04-26 15:10:06.266296] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.708 [2024-04-26 15:10:06.275509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.708 [2024-04-26 15:10:06.275937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.276119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.276149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.708 [2024-04-26 15:10:06.276166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.708 [2024-04-26 15:10:06.276403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.708 [2024-04-26 15:10:06.276643] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.708 [2024-04-26 15:10:06.276667] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.708 [2024-04-26 15:10:06.276682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.708 [2024-04-26 15:10:06.280234] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.708 [2024-04-26 15:10:06.289458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.708 [2024-04-26 15:10:06.289861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.290036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.290069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.708 [2024-04-26 15:10:06.290086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.708 [2024-04-26 15:10:06.290328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.708 [2024-04-26 15:10:06.290570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.708 [2024-04-26 15:10:06.290593] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.708 [2024-04-26 15:10:06.290608] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.708 [2024-04-26 15:10:06.294165] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.708 [2024-04-26 15:10:06.303391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.708 [2024-04-26 15:10:06.303806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.303974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.304002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.708 [2024-04-26 15:10:06.304027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.708 [2024-04-26 15:10:06.304267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.708 [2024-04-26 15:10:06.304508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.708 [2024-04-26 15:10:06.304532] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.708 [2024-04-26 15:10:06.304546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.708 [2024-04-26 15:10:06.308107] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.708 [2024-04-26 15:10:06.317328] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.708 [2024-04-26 15:10:06.317779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.317937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-04-26 15:10:06.317965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.708 [2024-04-26 15:10:06.317982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.708 [2024-04-26 15:10:06.318229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.708 [2024-04-26 15:10:06.318471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.708 [2024-04-26 15:10:06.318494] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.708 [2024-04-26 15:10:06.318509] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.708 [2024-04-26 15:10:06.322064] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.708 [2024-04-26 15:10:06.331290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.708 [2024-04-26 15:10:06.331711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.331864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.331893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.709 [2024-04-26 15:10:06.331910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.709 [2024-04-26 15:10:06.332158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.709 [2024-04-26 15:10:06.332405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.709 [2024-04-26 15:10:06.332430] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.709 [2024-04-26 15:10:06.332445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.709 [2024-04-26 15:10:06.336017] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.709 [2024-04-26 15:10:06.345250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.709 [2024-04-26 15:10:06.345755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.345915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.345943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.709 [2024-04-26 15:10:06.345964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.709 [2024-04-26 15:10:06.346209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.709 [2024-04-26 15:10:06.346451] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.709 [2024-04-26 15:10:06.346474] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.709 [2024-04-26 15:10:06.346490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.709 [2024-04-26 15:10:06.350045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.709 [2024-04-26 15:10:06.359087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.709 [2024-04-26 15:10:06.359531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.359752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.359802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.709 [2024-04-26 15:10:06.359819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.709 [2024-04-26 15:10:06.360065] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.709 [2024-04-26 15:10:06.360306] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.709 [2024-04-26 15:10:06.360330] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.709 [2024-04-26 15:10:06.360345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.709 [2024-04-26 15:10:06.363898] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
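Note: the follow-up "Failed to flush tqpair=... (9): Bad file descriptor" in each cycle is EBADF: after the connect attempt fails, the qpair's socket is torn down, so the subsequent flush operates on a closed descriptor. A standalone sketch of that effect (plain POSIX file-descriptor behavior, not SPDK's tqpair internals):

/* sketch: why an I/O call after teardown reports errno 9 (EBADF) */
#include <sys/socket.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    close(fd);                      /* socket destroyed after failed connect */
    if (write(fd, "x", 1) < 0) {
        /* errno is 9 (EBADF), matching the tqpair flush error above. */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}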
00:29:20.709 [2024-04-26 15:10:06.372923] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.709 [2024-04-26 15:10:06.373351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.373530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.373580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.709 [2024-04-26 15:10:06.373597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.709 [2024-04-26 15:10:06.373834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.709 [2024-04-26 15:10:06.374088] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.709 [2024-04-26 15:10:06.374118] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.709 [2024-04-26 15:10:06.374134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.709 [2024-04-26 15:10:06.377684] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.709 [2024-04-26 15:10:06.386909] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.709 [2024-04-26 15:10:06.387295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.387522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.387572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.709 [2024-04-26 15:10:06.387590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.709 [2024-04-26 15:10:06.387826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.709 [2024-04-26 15:10:06.388080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.709 [2024-04-26 15:10:06.388104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.709 [2024-04-26 15:10:06.388119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.709 [2024-04-26 15:10:06.391669] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.709 [2024-04-26 15:10:06.400894] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.709 [2024-04-26 15:10:06.401294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.401503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.401553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.709 [2024-04-26 15:10:06.401570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.709 [2024-04-26 15:10:06.401806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.709 [2024-04-26 15:10:06.402059] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.709 [2024-04-26 15:10:06.402084] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.709 [2024-04-26 15:10:06.402099] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.709 [2024-04-26 15:10:06.405645] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.709 [2024-04-26 15:10:06.414874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.709 [2024-04-26 15:10:06.415404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.415649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.415700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.709 [2024-04-26 15:10:06.415717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.709 [2024-04-26 15:10:06.415953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.709 [2024-04-26 15:10:06.416202] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.709 [2024-04-26 15:10:06.416227] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.709 [2024-04-26 15:10:06.416246] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.709 [2024-04-26 15:10:06.419797] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.709 [2024-04-26 15:10:06.428802] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.709 [2024-04-26 15:10:06.429216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.429516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.429567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.709 [2024-04-26 15:10:06.429584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.709 [2024-04-26 15:10:06.429820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.709 [2024-04-26 15:10:06.430072] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.709 [2024-04-26 15:10:06.430096] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.709 [2024-04-26 15:10:06.430111] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.709 [2024-04-26 15:10:06.433661] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.709 [2024-04-26 15:10:06.442667] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.709 [2024-04-26 15:10:06.443155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.443335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.709 [2024-04-26 15:10:06.443388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.709 [2024-04-26 15:10:06.443405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.709 [2024-04-26 15:10:06.443642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.709 [2024-04-26 15:10:06.443883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.709 [2024-04-26 15:10:06.443906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.709 [2024-04-26 15:10:06.443921] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.968 [2024-04-26 15:10:06.447470] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.968 [2024-04-26 15:10:06.456476] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.968 [2024-04-26 15:10:06.456959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.968 [2024-04-26 15:10:06.457232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.968 [2024-04-26 15:10:06.457271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.968 [2024-04-26 15:10:06.457288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.968 [2024-04-26 15:10:06.457525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.968 [2024-04-26 15:10:06.457765] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.968 [2024-04-26 15:10:06.457789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.968 [2024-04-26 15:10:06.457804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.968 [2024-04-26 15:10:06.461355] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.968 [2024-04-26 15:10:06.470355] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.968 [2024-04-26 15:10:06.470870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.968 [2024-04-26 15:10:06.471094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.968 [2024-04-26 15:10:06.471156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.968 [2024-04-26 15:10:06.471174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.968 [2024-04-26 15:10:06.471411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.968 [2024-04-26 15:10:06.471652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.968 [2024-04-26 15:10:06.471676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.968 [2024-04-26 15:10:06.471691] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.968 [2024-04-26 15:10:06.475250] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.968 [2024-04-26 15:10:06.484266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.968 [2024-04-26 15:10:06.484732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.968 [2024-04-26 15:10:06.484865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.968 [2024-04-26 15:10:06.484894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.968 [2024-04-26 15:10:06.484911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.968 [2024-04-26 15:10:06.485161] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.968 [2024-04-26 15:10:06.485403] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.968 [2024-04-26 15:10:06.485426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.968 [2024-04-26 15:10:06.485442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.968 [2024-04-26 15:10:06.488986] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.968 [2024-04-26 15:10:06.498218] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.968 [2024-04-26 15:10:06.498689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.498969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.498998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.499015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.969 [2024-04-26 15:10:06.499263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.969 [2024-04-26 15:10:06.499503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.969 [2024-04-26 15:10:06.499526] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.969 [2024-04-26 15:10:06.499541] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.969 [2024-04-26 15:10:06.503099] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.969 [2024-04-26 15:10:06.512099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.969 [2024-04-26 15:10:06.512621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.512892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.512950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.512967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.969 [2024-04-26 15:10:06.513217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.969 [2024-04-26 15:10:06.513458] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.969 [2024-04-26 15:10:06.513481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.969 [2024-04-26 15:10:06.513496] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.969 [2024-04-26 15:10:06.517046] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.969 [2024-04-26 15:10:06.526038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.969 [2024-04-26 15:10:06.526531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.526764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.526816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.526833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.969 [2024-04-26 15:10:06.527082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.969 [2024-04-26 15:10:06.527323] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.969 [2024-04-26 15:10:06.527347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.969 [2024-04-26 15:10:06.527362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.969 [2024-04-26 15:10:06.530904] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.969 [2024-04-26 15:10:06.539904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.969 [2024-04-26 15:10:06.540375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.540624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.540671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.540688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.969 [2024-04-26 15:10:06.540925] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.969 [2024-04-26 15:10:06.541178] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.969 [2024-04-26 15:10:06.541202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.969 [2024-04-26 15:10:06.541217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.969 [2024-04-26 15:10:06.544762] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.969 [2024-04-26 15:10:06.553756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.969 [2024-04-26 15:10:06.554220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.554502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.554553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.554570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.969 [2024-04-26 15:10:06.554806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.969 [2024-04-26 15:10:06.555059] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.969 [2024-04-26 15:10:06.555083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.969 [2024-04-26 15:10:06.555098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.969 [2024-04-26 15:10:06.558636] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.969 [2024-04-26 15:10:06.567641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.969 [2024-04-26 15:10:06.568064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.568188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.568216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.568234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.969 [2024-04-26 15:10:06.568471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.969 [2024-04-26 15:10:06.568713] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.969 [2024-04-26 15:10:06.568736] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.969 [2024-04-26 15:10:06.568751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.969 [2024-04-26 15:10:06.572298] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.969 [2024-04-26 15:10:06.581498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.969 [2024-04-26 15:10:06.581901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.582072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.582102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.582120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.969 [2024-04-26 15:10:06.582358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.969 [2024-04-26 15:10:06.582600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.969 [2024-04-26 15:10:06.582624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.969 [2024-04-26 15:10:06.582639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.969 [2024-04-26 15:10:06.586190] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.969 [2024-04-26 15:10:06.595391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.969 [2024-04-26 15:10:06.595818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.595987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.596027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.596048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.969 [2024-04-26 15:10:06.596285] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.969 [2024-04-26 15:10:06.596526] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.969 [2024-04-26 15:10:06.596549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.969 [2024-04-26 15:10:06.596564] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.969 [2024-04-26 15:10:06.600114] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.969 [2024-04-26 15:10:06.609315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.969 [2024-04-26 15:10:06.609813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.610041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.610071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.610088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.969 [2024-04-26 15:10:06.610324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.969 [2024-04-26 15:10:06.610566] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.969 [2024-04-26 15:10:06.610589] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.969 [2024-04-26 15:10:06.610604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.969 [2024-04-26 15:10:06.614163] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.969 [2024-04-26 15:10:06.623194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.969 [2024-04-26 15:10:06.623693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.623900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.969 [2024-04-26 15:10:06.623949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.969 [2024-04-26 15:10:06.623967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.970 [2024-04-26 15:10:06.624214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.970 [2024-04-26 15:10:06.624456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.970 [2024-04-26 15:10:06.624480] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.970 [2024-04-26 15:10:06.624495] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.970 [2024-04-26 15:10:06.628050] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.970 [2024-04-26 15:10:06.637061] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.970 [2024-04-26 15:10:06.637477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.637640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.637690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.970 [2024-04-26 15:10:06.637712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.970 [2024-04-26 15:10:06.637950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.970 [2024-04-26 15:10:06.638202] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.970 [2024-04-26 15:10:06.638226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.970 [2024-04-26 15:10:06.638241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.970 [2024-04-26 15:10:06.641785] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.970 [2024-04-26 15:10:06.651001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.970 [2024-04-26 15:10:06.651436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.651597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.651647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.970 [2024-04-26 15:10:06.651664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.970 [2024-04-26 15:10:06.651901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.970 [2024-04-26 15:10:06.652157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.970 [2024-04-26 15:10:06.652181] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.970 [2024-04-26 15:10:06.652196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.970 [2024-04-26 15:10:06.655741] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.970 [2024-04-26 15:10:06.664976] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.970 [2024-04-26 15:10:06.665387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.665599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.665648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.970 [2024-04-26 15:10:06.665665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.970 [2024-04-26 15:10:06.665901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.970 [2024-04-26 15:10:06.666154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.970 [2024-04-26 15:10:06.666178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.970 [2024-04-26 15:10:06.666194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.970 [2024-04-26 15:10:06.669740] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.970 [2024-04-26 15:10:06.678957] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.970 [2024-04-26 15:10:06.679387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.679541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.679580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.970 [2024-04-26 15:10:06.679597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.970 [2024-04-26 15:10:06.679839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.970 [2024-04-26 15:10:06.680095] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.970 [2024-04-26 15:10:06.680119] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.970 [2024-04-26 15:10:06.680134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.970 [2024-04-26 15:10:06.683681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.970 [2024-04-26 15:10:06.692923] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.970 [2024-04-26 15:10:06.693339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.693515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.693543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.970 [2024-04-26 15:10:06.693561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.970 [2024-04-26 15:10:06.693797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:20.970 [2024-04-26 15:10:06.694048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.970 [2024-04-26 15:10:06.694072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.970 [2024-04-26 15:10:06.694088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.970 [2024-04-26 15:10:06.697634] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.970 [2024-04-26 15:10:06.706842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.970 [2024-04-26 15:10:06.707242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.707425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.970 [2024-04-26 15:10:06.707453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:20.970 [2024-04-26 15:10:06.707470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:20.970 [2024-04-26 15:10:06.707706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.230 [2024-04-26 15:10:06.707947] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.230 [2024-04-26 15:10:06.707970] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.230 [2024-04-26 15:10:06.707985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.230 [2024-04-26 15:10:06.711543] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.230 [2024-04-26 15:10:06.720753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.230 [2024-04-26 15:10:06.721235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.230 [2024-04-26 15:10:06.721481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.230 [2024-04-26 15:10:06.721509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.230 [2024-04-26 15:10:06.721526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.230 [2024-04-26 15:10:06.721769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.230 [2024-04-26 15:10:06.722027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.230 [2024-04-26 15:10:06.722052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.230 [2024-04-26 15:10:06.722068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.230 [2024-04-26 15:10:06.725615] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.230 - 00:29:21.753 [2024-04-26 15:10:06.734660 - 15:10:07.364562] (46 further identical reset cycles elided; each repeats the sequence above, from the nvme_ctrlr_disconnect "resetting controller" NOTICE through the paired posix_sock_create connect() failures with errno = 111 on addr=10.0.0.2, port=4420 (tqpair=0x1c2ce50) to the _bdev_nvme_reset_ctrlr_complete "Resetting controller failed." ERROR, differing only in timestamps)
00:29:21.753 [2024-04-26 15:10:07.373768] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.753 [2024-04-26 15:10:07.374286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.753 [2024-04-26 15:10:07.374508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.753 [2024-04-26 15:10:07.374545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.753 [2024-04-26 15:10:07.374563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.753 [2024-04-26 15:10:07.374800] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.753 [2024-04-26 15:10:07.375050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.753 [2024-04-26 15:10:07.375075] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.753 [2024-04-26 15:10:07.375090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.753 [2024-04-26 15:10:07.378636] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.753 [2024-04-26 15:10:07.387627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.753 [2024-04-26 15:10:07.388125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.753 [2024-04-26 15:10:07.388339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.753 [2024-04-26 15:10:07.388367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.753 [2024-04-26 15:10:07.388384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.753 [2024-04-26 15:10:07.388621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.753 [2024-04-26 15:10:07.388862] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.753 [2024-04-26 15:10:07.388885] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.753 [2024-04-26 15:10:07.388900] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.753 [2024-04-26 15:10:07.392451] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.753 [2024-04-26 15:10:07.401443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.753 [2024-04-26 15:10:07.401937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.402197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.402227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.754 [2024-04-26 15:10:07.402244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.754 [2024-04-26 15:10:07.402481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.754 [2024-04-26 15:10:07.402722] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.754 [2024-04-26 15:10:07.402746] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.754 [2024-04-26 15:10:07.402760] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.754 [2024-04-26 15:10:07.406316] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.754 [2024-04-26 15:10:07.415317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.754 [2024-04-26 15:10:07.415735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.415948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.415977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.754 [2024-04-26 15:10:07.415999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.754 [2024-04-26 15:10:07.416249] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.754 [2024-04-26 15:10:07.416498] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.754 [2024-04-26 15:10:07.416522] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.754 [2024-04-26 15:10:07.416537] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.754 [2024-04-26 15:10:07.420090] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.754 [2024-04-26 15:10:07.429309] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.754 [2024-04-26 15:10:07.429749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.429969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.429997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.754 [2024-04-26 15:10:07.430014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.754 [2024-04-26 15:10:07.430261] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.754 [2024-04-26 15:10:07.430502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.754 [2024-04-26 15:10:07.430526] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.754 [2024-04-26 15:10:07.430541] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.754 [2024-04-26 15:10:07.434097] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.754 [2024-04-26 15:10:07.443330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.754 [2024-04-26 15:10:07.443752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.443908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.443936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.754 [2024-04-26 15:10:07.443953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.754 [2024-04-26 15:10:07.444198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.754 [2024-04-26 15:10:07.444440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.754 [2024-04-26 15:10:07.444463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.754 [2024-04-26 15:10:07.444478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.754 [2024-04-26 15:10:07.448029] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.754 [2024-04-26 15:10:07.457249] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.754 [2024-04-26 15:10:07.457671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.457827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.457855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.754 [2024-04-26 15:10:07.457873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.754 [2024-04-26 15:10:07.458125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.754 [2024-04-26 15:10:07.458366] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.754 [2024-04-26 15:10:07.458389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.754 [2024-04-26 15:10:07.458404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.754 [2024-04-26 15:10:07.461948] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.754 [2024-04-26 15:10:07.471176] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.754 [2024-04-26 15:10:07.471591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.471717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.471745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.754 [2024-04-26 15:10:07.471762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.754 [2024-04-26 15:10:07.471998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.754 [2024-04-26 15:10:07.472250] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.754 [2024-04-26 15:10:07.472275] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.754 [2024-04-26 15:10:07.472290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.754 [2024-04-26 15:10:07.475831] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.754 [2024-04-26 15:10:07.485054] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.754 [2024-04-26 15:10:07.485433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.485556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.754 [2024-04-26 15:10:07.485584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:21.754 [2024-04-26 15:10:07.485601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:21.754 [2024-04-26 15:10:07.485837] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:21.754 [2024-04-26 15:10:07.486089] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.754 [2024-04-26 15:10:07.486115] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.754 [2024-04-26 15:10:07.486130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.754 [2024-04-26 15:10:07.489677] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.014 [2024-04-26 15:10:07.498889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.014 [2024-04-26 15:10:07.499267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.014 [2024-04-26 15:10:07.499450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.014 [2024-04-26 15:10:07.499478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.014 [2024-04-26 15:10:07.499495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.014 [2024-04-26 15:10:07.499732] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.014 [2024-04-26 15:10:07.499979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.014 [2024-04-26 15:10:07.500003] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.014 [2024-04-26 15:10:07.500027] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.014 [2024-04-26 15:10:07.503585] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.014 [2024-04-26 15:10:07.512800] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.014 [2024-04-26 15:10:07.513173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.014 [2024-04-26 15:10:07.513313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.014 [2024-04-26 15:10:07.513341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.014 [2024-04-26 15:10:07.513358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.014 [2024-04-26 15:10:07.513594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.014 [2024-04-26 15:10:07.513836] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.014 [2024-04-26 15:10:07.513859] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.014 [2024-04-26 15:10:07.513874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.014 [2024-04-26 15:10:07.517432] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.014 [2024-04-26 15:10:07.526655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.014 [2024-04-26 15:10:07.527085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.014 [2024-04-26 15:10:07.527240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.014 [2024-04-26 15:10:07.527268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.014 [2024-04-26 15:10:07.527285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.014 [2024-04-26 15:10:07.527522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.014 [2024-04-26 15:10:07.527762] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.014 [2024-04-26 15:10:07.527786] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.014 [2024-04-26 15:10:07.527801] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.014 [2024-04-26 15:10:07.531353] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.014 [2024-04-26 15:10:07.540568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.014 [2024-04-26 15:10:07.541033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.014 [2024-04-26 15:10:07.541169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.014 [2024-04-26 15:10:07.541198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.014 [2024-04-26 15:10:07.541215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.014 [2024-04-26 15:10:07.541451] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.014 [2024-04-26 15:10:07.541692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.014 [2024-04-26 15:10:07.541721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.014 [2024-04-26 15:10:07.541736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.014 [2024-04-26 15:10:07.545294] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.014 [2024-04-26 15:10:07.554512] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.014 [2024-04-26 15:10:07.554973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.014 [2024-04-26 15:10:07.555106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.555135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.555152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.555388] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.555630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.015 [2024-04-26 15:10:07.555653] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.015 [2024-04-26 15:10:07.555669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.015 [2024-04-26 15:10:07.559221] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.015 [2024-04-26 15:10:07.568437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.015 [2024-04-26 15:10:07.568959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.569129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.569158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.569175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.569411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.569652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.015 [2024-04-26 15:10:07.569675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.015 [2024-04-26 15:10:07.569690] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.015 [2024-04-26 15:10:07.573240] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.015 [2024-04-26 15:10:07.582450] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.015 [2024-04-26 15:10:07.582908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.583087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.583116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.583134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.583370] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.583611] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.015 [2024-04-26 15:10:07.583635] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.015 [2024-04-26 15:10:07.583656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.015 [2024-04-26 15:10:07.587205] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.015 [2024-04-26 15:10:07.596414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.015 [2024-04-26 15:10:07.596871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.597069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.597098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.597115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.597351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.597592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.015 [2024-04-26 15:10:07.597615] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.015 [2024-04-26 15:10:07.597631] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.015 [2024-04-26 15:10:07.601187] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.015 [2024-04-26 15:10:07.610391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.015 [2024-04-26 15:10:07.610848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.611043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.611072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.611089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.611326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.611567] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.015 [2024-04-26 15:10:07.611590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.015 [2024-04-26 15:10:07.611605] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.015 [2024-04-26 15:10:07.615156] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.015 [2024-04-26 15:10:07.624350] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.015 [2024-04-26 15:10:07.624848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.625070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.625100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.625117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.625355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.625596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.015 [2024-04-26 15:10:07.625619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.015 [2024-04-26 15:10:07.625634] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.015 [2024-04-26 15:10:07.629190] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.015 [2024-04-26 15:10:07.638190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.015 [2024-04-26 15:10:07.638671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.638887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.638916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.638933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.639180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.639422] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.015 [2024-04-26 15:10:07.639446] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.015 [2024-04-26 15:10:07.639461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.015 [2024-04-26 15:10:07.643002] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.015 [2024-04-26 15:10:07.652216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.015 [2024-04-26 15:10:07.652670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.652908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.652936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.652953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.653200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.653442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.015 [2024-04-26 15:10:07.653465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.015 [2024-04-26 15:10:07.653480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.015 [2024-04-26 15:10:07.657027] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.015 [2024-04-26 15:10:07.666012] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.015 [2024-04-26 15:10:07.666505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.666724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.666752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.666769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.667006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.667258] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.015 [2024-04-26 15:10:07.667281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.015 [2024-04-26 15:10:07.667296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.015 [2024-04-26 15:10:07.670839] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.015 [2024-04-26 15:10:07.679830] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.015 [2024-04-26 15:10:07.680298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.680494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.015 [2024-04-26 15:10:07.680523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.015 [2024-04-26 15:10:07.680540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.015 [2024-04-26 15:10:07.680777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.015 [2024-04-26 15:10:07.681017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.016 [2024-04-26 15:10:07.681052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.016 [2024-04-26 15:10:07.681074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.016 [2024-04-26 15:10:07.684616] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.016 [2024-04-26 15:10:07.693818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.016 [2024-04-26 15:10:07.694301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.694532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.694560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.016 [2024-04-26 15:10:07.694577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.016 [2024-04-26 15:10:07.694813] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.016 [2024-04-26 15:10:07.695065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.016 [2024-04-26 15:10:07.695089] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.016 [2024-04-26 15:10:07.695104] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.016 [2024-04-26 15:10:07.698645] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3902830 Killed "${NVMF_APP[@]}" "$@" 00:29:22.016 15:10:07 -- host/bdevperf.sh@36 -- # tgt_init 00:29:22.016 15:10:07 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:22.016 15:10:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:22.016 15:10:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:22.016 15:10:07 -- common/autotest_common.sh@10 -- # set +x 00:29:22.016 [2024-04-26 15:10:07.707648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.016 [2024-04-26 15:10:07.708068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.708233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.708261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.016 [2024-04-26 15:10:07.708279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.016 [2024-04-26 15:10:07.708515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.016 [2024-04-26 15:10:07.708755] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.016 [2024-04-26 15:10:07.708779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.016 [2024-04-26 15:10:07.708801] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.016 15:10:07 -- nvmf/common.sh@470 -- # nvmfpid=3903784 00:29:22.016 15:10:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:22.016 15:10:07 -- nvmf/common.sh@471 -- # waitforlisten 3903784 00:29:22.016 15:10:07 -- common/autotest_common.sh@817 -- # '[' -z 3903784 ']' 00:29:22.016 15:10:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.016 15:10:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:22.016 15:10:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.016 15:10:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:22.016 15:10:07 -- common/autotest_common.sh@10 -- # set +x 00:29:22.016 [2024-04-26 15:10:07.712360] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
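The `-m 0xE` passed to nvmf_tgt above is a hexadecimal CPU core mask: 0xE is binary 1110, selecting cores 1, 2, and 3, which is consistent with the "Total cores available: 3" notice that appears once the app starts. A minimal sketch (a hypothetical helper, not SPDK's actual option parser) that expands such a mask:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Same mask as the -m 0xE argument in the trace above. */
    unsigned long mask = strtoul("0xE", NULL, 16);
    int count = 0;

    printf("cores:");
    for (int cpu = 0; cpu < (int)(8 * sizeof(mask)); cpu++) {
        if (mask & (1UL << cpu)) {   /* bit N set => core N selected */
            printf(" %d", cpu);
            count++;
        }
    }
    printf("\ntotal: %d\n", count);  /* prints "cores: 1 2 3" then "total: 3" */
    return 0;
}
```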
00:29:22.016 [2024-04-26 15:10:07.721566] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.016 [2024-04-26 15:10:07.721956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.722128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.722158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.016 [2024-04-26 15:10:07.722176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.016 [2024-04-26 15:10:07.722413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.016 [2024-04-26 15:10:07.722654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.016 [2024-04-26 15:10:07.722678] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.016 [2024-04-26 15:10:07.722693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.016 [2024-04-26 15:10:07.726247] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.016 [2024-04-26 15:10:07.735460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.016 [2024-04-26 15:10:07.735907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.736064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.736104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.016 [2024-04-26 15:10:07.736122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.016 [2024-04-26 15:10:07.736360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.016 [2024-04-26 15:10:07.736603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.016 [2024-04-26 15:10:07.736627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.016 [2024-04-26 15:10:07.736643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.016 [2024-04-26 15:10:07.740196] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.016 [2024-04-26 15:10:07.749415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.016 [2024-04-26 15:10:07.749884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.750065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.016 [2024-04-26 15:10:07.750095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.016 [2024-04-26 15:10:07.750120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.016 [2024-04-26 15:10:07.750359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.016 [2024-04-26 15:10:07.750600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.016 [2024-04-26 15:10:07.750623] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.016 [2024-04-26 15:10:07.750638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.276 [2024-04-26 15:10:07.754201] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.276 [2024-04-26 15:10:07.756364] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:29:22.276 [2024-04-26 15:10:07.756440] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.276 [2024-04-26 15:10:07.763413] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.276 [2024-04-26 15:10:07.763846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.763954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.763983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.276 [2024-04-26 15:10:07.764001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.276 [2024-04-26 15:10:07.764247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.276 [2024-04-26 15:10:07.764490] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.276 [2024-04-26 15:10:07.764514] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.276 [2024-04-26 15:10:07.764529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.276 [2024-04-26 15:10:07.768101] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.276 [2024-04-26 15:10:07.777322] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.276 [2024-04-26 15:10:07.777830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.777952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.777980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.276 [2024-04-26 15:10:07.777998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.276 [2024-04-26 15:10:07.778245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.276 [2024-04-26 15:10:07.778487] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.276 [2024-04-26 15:10:07.778511] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.276 [2024-04-26 15:10:07.778526] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.276 [2024-04-26 15:10:07.782079] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.276 [2024-04-26 15:10:07.790833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.276 [2024-04-26 15:10:07.791225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.791405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.791430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.276 [2024-04-26 15:10:07.791445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.276 [2024-04-26 15:10:07.791646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.276 [2024-04-26 15:10:07.791852] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.276 [2024-04-26 15:10:07.791873] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.276 [2024-04-26 15:10:07.791886] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.276 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.276 [2024-04-26 15:10:07.795025] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.276 [2024-04-26 15:10:07.801979] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
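The "No free 2048 kB hugepages reported on node 1" line above comes from DPDK's EAL memory initialization; the hugepage pools it inspects are exposed in /proc/meminfo. A quick standalone check (assumes a Linux /proc layout; not part of this test suite):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof(line), f)) {
        /* HugePages_Total/Free/... describe the default (2048 kB) pool. */
        if (strncmp(line, "HugePages", 9) == 0 ||
            strncmp(line, "Hugepagesize", 12) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
```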
00:29:22.276 [2024-04-26 15:10:07.804658] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.276 [2024-04-26 15:10:07.805086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.805232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.805258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.276 [2024-04-26 15:10:07.805275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.276 [2024-04-26 15:10:07.805498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.276 [2024-04-26 15:10:07.805708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.276 [2024-04-26 15:10:07.805729] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.276 [2024-04-26 15:10:07.805743] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.276 [2024-04-26 15:10:07.808923] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.276 [2024-04-26 15:10:07.818297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.276 [2024-04-26 15:10:07.818743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.818880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.818909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.276 [2024-04-26 15:10:07.818926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.276 [2024-04-26 15:10:07.819178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.276 [2024-04-26 15:10:07.819435] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.276 [2024-04-26 15:10:07.819460] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.276 [2024-04-26 15:10:07.819475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.276 [2024-04-26 15:10:07.823045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.276 [2024-04-26 15:10:07.831965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.276 [2024-04-26 15:10:07.832404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.832512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.832540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.276 [2024-04-26 15:10:07.832557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.276 [2024-04-26 15:10:07.832793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.276 [2024-04-26 15:10:07.832837] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:22.276 [2024-04-26 15:10:07.833059] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.276 [2024-04-26 15:10:07.833081] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.276 [2024-04-26 15:10:07.833095] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.276 [2024-04-26 15:10:07.836635] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.276 [2024-04-26 15:10:07.845860] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.276 [2024-04-26 15:10:07.846474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.846620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.276 [2024-04-26 15:10:07.846650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.276 [2024-04-26 15:10:07.846670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.277 [2024-04-26 15:10:07.846923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.277 [2024-04-26 15:10:07.847177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.277 [2024-04-26 15:10:07.847200] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.277 [2024-04-26 15:10:07.847217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.277 [2024-04-26 15:10:07.850753] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.277 [2024-04-26 15:10:07.859774] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.277 [2024-04-26 15:10:07.860241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.860410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.860439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.277 [2024-04-26 15:10:07.860459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.277 [2024-04-26 15:10:07.860698] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.277 [2024-04-26 15:10:07.860946] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.277 [2024-04-26 15:10:07.860970] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.277 [2024-04-26 15:10:07.860985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.277 [2024-04-26 15:10:07.864584] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.277 [2024-04-26 15:10:07.873611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.277 [2024-04-26 15:10:07.874071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.874222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.874248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.277 [2024-04-26 15:10:07.874264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.277 [2024-04-26 15:10:07.874523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.277 [2024-04-26 15:10:07.874766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.277 [2024-04-26 15:10:07.874790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.277 [2024-04-26 15:10:07.874806] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.277 [2024-04-26 15:10:07.878386] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.277 [2024-04-26 15:10:07.887617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.277 [2024-04-26 15:10:07.888154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.888314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.888339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.277 [2024-04-26 15:10:07.888356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.277 [2024-04-26 15:10:07.888616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.277 [2024-04-26 15:10:07.888863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.277 [2024-04-26 15:10:07.888887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.277 [2024-04-26 15:10:07.888904] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.277 [2024-04-26 15:10:07.892472] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.277 [2024-04-26 15:10:07.901553] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.277 [2024-04-26 15:10:07.902027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.902170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.902195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.277 [2024-04-26 15:10:07.902211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.277 [2024-04-26 15:10:07.902454] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.277 [2024-04-26 15:10:07.902697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.277 [2024-04-26 15:10:07.902721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.277 [2024-04-26 15:10:07.902737] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.277 [2024-04-26 15:10:07.906229] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.277 [2024-04-26 15:10:07.915326] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.277 [2024-04-26 15:10:07.915826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.916077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.916114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.277 [2024-04-26 15:10:07.916130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.277 [2024-04-26 15:10:07.916358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.277 [2024-04-26 15:10:07.916608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.277 [2024-04-26 15:10:07.916632] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.277 [2024-04-26 15:10:07.916647] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.277 [2024-04-26 15:10:07.920150] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.277 [2024-04-26 15:10:07.924873] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.277 [2024-04-26 15:10:07.924907] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.277 [2024-04-26 15:10:07.924936] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.277 [2024-04-26 15:10:07.924948] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.277 [2024-04-26 15:10:07.924958] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
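The app_setup_trace notices above describe how to pull the target's tracepoints (group mask 0xFFFF, shm instance 0). As a sketch of the two options the notice offers — live capture versus keeping the shm file for offline analysis — where the -f flag is an assumption about the spdk_trace build rather than something this log demonstrates:

    # Live snapshot from the running target (shm id 0, per the notice):
    spdk_trace -s nvmf -i 0
    # Or save the shm file for later decoding:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
    spdk_trace -f /tmp/nvmf_trace.0   # assumed flag for reading a saved file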
00:29:22.277 [2024-04-26 15:10:07.925015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.277 [2024-04-26 15:10:07.925144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.277 [2024-04-26 15:10:07.925148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.277 [2024-04-26 15:10:07.928900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.277 [2024-04-26 15:10:07.929389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.929597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.929622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.277 [2024-04-26 15:10:07.929638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.277 [2024-04-26 15:10:07.929868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.277 [2024-04-26 15:10:07.930119] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.277 [2024-04-26 15:10:07.930142] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.277 [2024-04-26 15:10:07.930157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.277 [2024-04-26 15:10:07.933321] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.277 [2024-04-26 15:10:07.942535] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.277 [2024-04-26 15:10:07.943153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.943375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.943401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.277 [2024-04-26 15:10:07.943430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.277 [2024-04-26 15:10:07.943664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.277 [2024-04-26 15:10:07.943879] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.277 [2024-04-26 15:10:07.943911] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.277 [2024-04-26 15:10:07.943927] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.277 [2024-04-26 15:10:07.947153] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.277 [2024-04-26 15:10:07.956143] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.277 [2024-04-26 15:10:07.956755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.956938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.277 [2024-04-26 15:10:07.956963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.277 [2024-04-26 15:10:07.956991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.277 [2024-04-26 15:10:07.957255] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.277 [2024-04-26 15:10:07.957489] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.277 [2024-04-26 15:10:07.957510] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.277 [2024-04-26 15:10:07.957527] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.277 [2024-04-26 15:10:07.960697] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.277 [2024-04-26 15:10:07.969634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.277 [2024-04-26 15:10:07.970184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.278 [2024-04-26 15:10:07.970387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.278 [2024-04-26 15:10:07.970422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.278 [2024-04-26 15:10:07.970440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.278 [2024-04-26 15:10:07.970658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.278 [2024-04-26 15:10:07.970873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.278 [2024-04-26 15:10:07.970894] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.278 [2024-04-26 15:10:07.970910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.278 [2024-04-26 15:10:07.974016] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.278 [2024-04-26 15:10:07.983374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.278 [2024-04-26 15:10:07.983914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.278 [2024-04-26 15:10:07.984130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.278 [2024-04-26 15:10:07.984159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.278 [2024-04-26 15:10:07.984177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.278 [2024-04-26 15:10:07.984398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.278 [2024-04-26 15:10:07.984619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.278 [2024-04-26 15:10:07.984640] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.278 [2024-04-26 15:10:07.984669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.278 [2024-04-26 15:10:07.987922] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.278 [2024-04-26 15:10:07.996899] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.278 [2024-04-26 15:10:07.997513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.278 [2024-04-26 15:10:07.997723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.278 [2024-04-26 15:10:07.997749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.278 [2024-04-26 15:10:07.997767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.278 [2024-04-26 15:10:07.997984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.278 [2024-04-26 15:10:07.998229] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.278 [2024-04-26 15:10:07.998253] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.278 [2024-04-26 15:10:07.998269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.278 [2024-04-26 15:10:08.001543] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.278 [2024-04-26 15:10:08.010377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.278 [2024-04-26 15:10:08.010922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.278 [2024-04-26 15:10:08.011135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.278 [2024-04-26 15:10:08.011167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.278 [2024-04-26 15:10:08.011185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.278 [2024-04-26 15:10:08.011406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.278 [2024-04-26 15:10:08.011630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.278 [2024-04-26 15:10:08.011653] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.278 [2024-04-26 15:10:08.011668] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.278 [2024-04-26 15:10:08.014984] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.537 [2024-04-26 15:10:08.024027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.537 [2024-04-26 15:10:08.024491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.537 [2024-04-26 15:10:08.024651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.537 [2024-04-26 15:10:08.024675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.537 [2024-04-26 15:10:08.024691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.537 [2024-04-26 15:10:08.024901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.537 [2024-04-26 15:10:08.025140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.537 [2024-04-26 15:10:08.025162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.537 [2024-04-26 15:10:08.025177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.537 [2024-04-26 15:10:08.028430] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.537 [2024-04-26 15:10:08.037592] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.537 [2024-04-26 15:10:08.038068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.537 [2024-04-26 15:10:08.038227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.537 [2024-04-26 15:10:08.038253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.538 [2024-04-26 15:10:08.038269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.538 [2024-04-26 15:10:08.038483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.538 [2024-04-26 15:10:08.038700] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.538 [2024-04-26 15:10:08.038722] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.538 [2024-04-26 15:10:08.038735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.538 [2024-04-26 15:10:08.041942] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.538 15:10:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:22.538 15:10:08 -- common/autotest_common.sh@850 -- # return 0 00:29:22.538 15:10:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:22.538 15:10:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:22.538 15:10:08 -- common/autotest_common.sh@10 -- # set +x 00:29:22.538 [2024-04-26 15:10:08.051104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.538 [2024-04-26 15:10:08.051487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.051677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.051702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.538 [2024-04-26 15:10:08.051717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.538 [2024-04-26 15:10:08.051924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.538 [2024-04-26 15:10:08.052173] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.538 [2024-04-26 15:10:08.052196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.538 [2024-04-26 15:10:08.052210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.538 [2024-04-26 15:10:08.055440] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.538 [2024-04-26 15:10:08.064571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.538 [2024-04-26 15:10:08.064985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.065136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.065163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.538 [2024-04-26 15:10:08.065179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.538 [2024-04-26 15:10:08.065406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.538 [2024-04-26 15:10:08.065617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.538 [2024-04-26 15:10:08.065637] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.538 [2024-04-26 15:10:08.065656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.538 [2024-04-26 15:10:08.068794] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.538 15:10:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.538 15:10:08 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.538 15:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.538 15:10:08 -- common/autotest_common.sh@10 -- # set +x 00:29:22.538 [2024-04-26 15:10:08.074490] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.538 [2024-04-26 15:10:08.078158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.538 [2024-04-26 15:10:08.078647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.078839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.078864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.538 [2024-04-26 15:10:08.078879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.538 [2024-04-26 15:10:08.079114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.538 [2024-04-26 15:10:08.079347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.538 [2024-04-26 15:10:08.079368] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.538 [2024-04-26 15:10:08.079381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:22.538 15:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.538 15:10:08 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:22.538 15:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.538 15:10:08 -- common/autotest_common.sh@10 -- # set +x 00:29:22.538 [2024-04-26 15:10:08.082641] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.538 [2024-04-26 15:10:08.091625] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.538 [2024-04-26 15:10:08.092099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.092257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.092283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.538 [2024-04-26 15:10:08.092298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.538 [2024-04-26 15:10:08.092531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.538 [2024-04-26 15:10:08.092735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.538 [2024-04-26 15:10:08.092755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.538 [2024-04-26 15:10:08.092767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.538 [2024-04-26 15:10:08.095887] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.538 [2024-04-26 15:10:08.105080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.538 [2024-04-26 15:10:08.105578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.105802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.105827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.538 [2024-04-26 15:10:08.105848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.538 [2024-04-26 15:10:08.106083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.538 [2024-04-26 15:10:08.106301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.538 [2024-04-26 15:10:08.106344] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.538 [2024-04-26 15:10:08.106358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.538 [2024-04-26 15:10:08.109523] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
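The 15:10:08 xtrace lines interleaved with the reset loop are the bdevperf script bringing up its target over JSON-RPC; rpc_cmd in these tests is effectively a wrapper around scripts/rpc.py. Pulled out of the interleaving, the bring-up sequence issued across this and the next few entries is, as a sketch:

    # Transport first, then a RAM-backed namespace, then expose it over TCP:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

Only after the final add_listener does the host's reset loop stop failing ("Resetting controller successful" below).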
00:29:22.538 [2024-04-26 15:10:08.118637] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.538 [2024-04-26 15:10:08.119230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.119410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.119435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.538 [2024-04-26 15:10:08.119453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.538 [2024-04-26 15:10:08.119670] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.538 [2024-04-26 15:10:08.119886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.538 [2024-04-26 15:10:08.119907] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.538 [2024-04-26 15:10:08.119923] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.538 [2024-04-26 15:10:08.123145] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.538 Malloc0 00:29:22.538 15:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.538 15:10:08 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:22.538 15:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.538 15:10:08 -- common/autotest_common.sh@10 -- # set +x 00:29:22.538 15:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.538 15:10:08 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:22.538 15:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.538 15:10:08 -- common/autotest_common.sh@10 -- # set +x 00:29:22.538 [2024-04-26 15:10:08.132303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.538 [2024-04-26 15:10:08.132799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.132979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.538 [2024-04-26 15:10:08.133034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2ce50 with addr=10.0.0.2, port=4420 00:29:22.538 [2024-04-26 15:10:08.133053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce50 is same with the state(5) to be set 00:29:22.538 [2024-04-26 15:10:08.133268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2ce50 (9): Bad file descriptor 00:29:22.538 [2024-04-26 15:10:08.133497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.538 [2024-04-26 15:10:08.133518] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.538 [2024-04-26 15:10:08.133532] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:22.538 [2024-04-26 15:10:08.136783] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.538 15:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.538 15:10:08 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.538 15:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.538 15:10:08 -- common/autotest_common.sh@10 -- # set +x 00:29:22.539 [2024-04-26 15:10:08.143210] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.539 [2024-04-26 15:10:08.145826] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:22.539 15:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.539 15:10:08 -- host/bdevperf.sh@38 -- # wait 3903000 00:29:22.539 [2024-04-26 15:10:08.180730] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:32.508 00:29:32.508 Latency(us) 00:29:32.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.508 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:32.508 Verification LBA range: start 0x0 length 0x4000 00:29:32.508 Nvme1n1 : 15.01 6642.88 25.95 8473.72 0.00 8442.93 837.40 17185.00 00:29:32.508 =================================================================================================================== 00:29:32.508 Total : 6642.88 25.95 8473.72 0.00 8442.93 837.40 17185.00 00:29:32.508 15:10:17 -- host/bdevperf.sh@39 -- # sync 00:29:32.508 15:10:17 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:32.508 15:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.508 15:10:17 -- common/autotest_common.sh@10 -- # set +x 00:29:32.508 15:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.508 15:10:17 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:32.508 15:10:17 -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:32.508 15:10:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:32.508 15:10:17 -- nvmf/common.sh@117 -- # sync 00:29:32.508 15:10:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:32.508 15:10:17 -- nvmf/common.sh@120 -- # set +e 00:29:32.508 15:10:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:32.508 15:10:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:32.508 rmmod nvme_tcp 00:29:32.508 rmmod nvme_fabrics 00:29:32.508 rmmod nvme_keyring 00:29:32.508 15:10:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:32.508 15:10:17 -- nvmf/common.sh@124 -- # set -e 00:29:32.508 15:10:17 -- nvmf/common.sh@125 -- # return 0 00:29:32.508 15:10:17 -- nvmf/common.sh@478 -- # '[' -n 3903784 ']' 00:29:32.508 15:10:17 -- nvmf/common.sh@479 -- # killprocess 3903784 00:29:32.508 15:10:17 -- common/autotest_common.sh@936 -- # '[' -z 3903784 ']' 00:29:32.508 15:10:17 -- common/autotest_common.sh@940 -- # kill -0 3903784 00:29:32.508 15:10:17 -- common/autotest_common.sh@941 -- # uname 00:29:32.508 15:10:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:32.508 15:10:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3903784 00:29:32.508 15:10:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:32.508 15:10:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:32.508 15:10:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
3903784' 00:29:32.508 killing process with pid 3903784 00:29:32.508 15:10:17 -- common/autotest_common.sh@955 -- # kill 3903784 00:29:32.508 15:10:17 -- common/autotest_common.sh@960 -- # wait 3903784 00:29:32.508 15:10:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:32.508 15:10:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:32.508 15:10:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:32.508 15:10:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:32.508 15:10:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:32.508 15:10:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.508 15:10:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.508 15:10:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.413 15:10:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:34.413 00:29:34.413 real 0m22.285s 00:29:34.413 user 0m59.308s 00:29:34.413 sys 0m4.464s 00:29:34.413 15:10:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:34.413 15:10:19 -- common/autotest_common.sh@10 -- # set +x 00:29:34.413 ************************************ 00:29:34.413 END TEST nvmf_bdevperf 00:29:34.413 ************************************ 00:29:34.413 15:10:19 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:34.413 15:10:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:34.413 15:10:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:34.413 15:10:19 -- common/autotest_common.sh@10 -- # set +x 00:29:34.413 ************************************ 00:29:34.413 START TEST nvmf_target_disconnect 00:29:34.413 ************************************ 00:29:34.413 15:10:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:34.413 * Looking for test storage... 
00:29:34.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:34.413 15:10:19 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.413 15:10:19 -- nvmf/common.sh@7 -- # uname -s 00:29:34.413 15:10:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.413 15:10:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.413 15:10:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.413 15:10:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.413 15:10:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.413 15:10:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.413 15:10:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.413 15:10:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.413 15:10:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.413 15:10:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.413 15:10:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:34.413 15:10:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:34.413 15:10:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.413 15:10:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.413 15:10:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.413 15:10:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.413 15:10:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.413 15:10:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.413 15:10:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.413 15:10:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.413 15:10:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.413 15:10:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.413 15:10:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.413 15:10:19 -- paths/export.sh@5 -- # export PATH 00:29:34.413 15:10:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.413 15:10:19 -- nvmf/common.sh@47 -- # : 0 00:29:34.413 15:10:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:34.413 15:10:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:34.413 15:10:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.413 15:10:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.413 15:10:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.413 15:10:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:34.413 15:10:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:34.413 15:10:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:34.413 15:10:19 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:34.413 15:10:19 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:34.413 15:10:19 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:34.413 15:10:19 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:34.413 15:10:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:34.413 15:10:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.413 15:10:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:34.413 15:10:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:34.413 15:10:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:34.413 15:10:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.413 15:10:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:34.413 15:10:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.413 15:10:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:34.413 15:10:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:34.413 15:10:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:34.413 15:10:19 -- common/autotest_common.sh@10 -- # set +x 00:29:36.310 15:10:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:36.310 15:10:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:36.310 15:10:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:36.310 15:10:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:36.310 15:10:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:36.310 15:10:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:36.310 15:10:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:36.310 
15:10:21 -- nvmf/common.sh@295 -- # net_devs=() 00:29:36.310 15:10:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:36.310 15:10:21 -- nvmf/common.sh@296 -- # e810=() 00:29:36.310 15:10:21 -- nvmf/common.sh@296 -- # local -ga e810 00:29:36.310 15:10:21 -- nvmf/common.sh@297 -- # x722=() 00:29:36.310 15:10:21 -- nvmf/common.sh@297 -- # local -ga x722 00:29:36.310 15:10:21 -- nvmf/common.sh@298 -- # mlx=() 00:29:36.310 15:10:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:36.310 15:10:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.310 15:10:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:36.310 15:10:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:36.310 15:10:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:36.310 15:10:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:36.310 15:10:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:36.310 15:10:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:36.310 15:10:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:36.311 15:10:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:36.311 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:36.311 15:10:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:36.311 15:10:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:36.311 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:36.311 15:10:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:36.311 15:10:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:36.311 15:10:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.311 15:10:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:36.311 15:10:21 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.311 15:10:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:36.311 Found net devices under 0000:84:00.0: cvl_0_0 00:29:36.311 15:10:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.311 15:10:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:36.311 15:10:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.311 15:10:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:36.311 15:10:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.311 15:10:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:36.311 Found net devices under 0000:84:00.1: cvl_0_1 00:29:36.311 15:10:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.311 15:10:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:36.311 15:10:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:36.311 15:10:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:36.311 15:10:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:36.311 15:10:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.311 15:10:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.311 15:10:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.311 15:10:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:36.311 15:10:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.311 15:10:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.311 15:10:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:36.311 15:10:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.311 15:10:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.311 15:10:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:36.311 15:10:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:36.311 15:10:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.311 15:10:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.311 15:10:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.311 15:10:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.311 15:10:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:36.311 15:10:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.311 15:10:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.311 15:10:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.311 15:10:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:36.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:29:36.311 00:29:36.311 --- 10.0.0.2 ping statistics --- 00:29:36.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.311 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:36.311 15:10:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:29:36.568 00:29:36.568 --- 10.0.0.1 ping statistics --- 00:29:36.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.568 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:29:36.568 15:10:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.568 15:10:22 -- nvmf/common.sh@411 -- # return 0 00:29:36.568 15:10:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:36.568 15:10:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.568 15:10:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:36.568 15:10:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:36.568 15:10:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.568 15:10:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:36.568 15:10:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:36.568 15:10:22 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:36.568 15:10:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:36.568 15:10:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:36.568 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:36.568 ************************************ 00:29:36.568 START TEST nvmf_target_disconnect_tc1 00:29:36.568 ************************************ 00:29:36.568 15:10:22 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:29:36.568 15:10:22 -- host/target_disconnect.sh@32 -- # set +e 00:29:36.568 15:10:22 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.568 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.568 [2024-04-26 15:10:22.266881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.568 [2024-04-26 15:10:22.267144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.568 [2024-04-26 15:10:22.267171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x51e4e0 with addr=10.0.0.2, port=4420 00:29:36.568 [2024-04-26 15:10:22.267202] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:36.568 [2024-04-26 15:10:22.267225] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:36.568 [2024-04-26 15:10:22.267239] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:36.568 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:36.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:36.568 Initializing NVMe Controllers 00:29:36.568 15:10:22 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:36.568 15:10:22 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:36.568 15:10:22 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:29:36.568 15:10:22 -- common/autotest_common.sh@1139 -- # return 0 00:29:36.568 15:10:22 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:36.568 15:10:22 -- host/target_disconnect.sh@41 -- # set -e 00:29:36.568 00:29:36.568 real 0m0.095s 00:29:36.568 user 0m0.039s 00:29:36.568 sys 0m0.055s 00:29:36.568 15:10:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:36.568 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:36.568 ************************************ 00:29:36.568 
END TEST nvmf_target_disconnect_tc1 00:29:36.568 ************************************ 00:29:36.568 15:10:22 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:36.568 15:10:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:36.568 15:10:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:36.568 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:36.825 ************************************ 00:29:36.825 START TEST nvmf_target_disconnect_tc2 00:29:36.825 ************************************ 00:29:36.825 15:10:22 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:29:36.825 15:10:22 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:29:36.825 15:10:22 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:36.825 15:10:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:36.825 15:10:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:36.825 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:36.825 15:10:22 -- nvmf/common.sh@470 -- # nvmfpid=3906966 00:29:36.825 15:10:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:36.825 15:10:22 -- nvmf/common.sh@471 -- # waitforlisten 3906966 00:29:36.825 15:10:22 -- common/autotest_common.sh@817 -- # '[' -z 3906966 ']' 00:29:36.825 15:10:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.825 15:10:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:36.825 15:10:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.825 15:10:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:36.825 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:36.825 [2024-04-26 15:10:22.449968] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:29:36.825 [2024-04-26 15:10:22.450081] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.825 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.825 [2024-04-26 15:10:22.487306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:36.825 [2024-04-26 15:10:22.513578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.083 [2024-04-26 15:10:22.596672] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.083 [2024-04-26 15:10:22.596730] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.083 [2024-04-26 15:10:22.596754] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.083 [2024-04-26 15:10:22.596764] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.083 [2024-04-26 15:10:22.596773] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
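tc2 follows the same shape as the bdevperf run above, but the fault is injected deliberately: disconnect_init starts a fresh target (nvmfpid=3906966) inside the netns, the reconnect example is launched against it, and the script then kill -9s the target mid-I/O to exercise the host's disconnect handling. Condensed from the entries that follow, the pattern is roughly:

    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"   # yank the target; in-flight commands abort
                         # (the sct=0/sc=8 "I/O failed" burst below)
    sleep 2

The burst of "Read/Write completed with error (sct=0, sc=8)" entries after the kill is those aborted commands: sc=8 under the generic status type corresponds to command aborted due to SQ deletion, which is what a torn-down qpair reports for everything still queued.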
00:29:37.083 [2024-04-26 15:10:22.596915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:37.083 [2024-04-26 15:10:22.597060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:37.083 [2024-04-26 15:10:22.597164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:37.083 [2024-04-26 15:10:22.597167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:37.083 15:10:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:37.083 15:10:22 -- common/autotest_common.sh@850 -- # return 0 00:29:37.083 15:10:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:37.083 15:10:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:37.083 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:37.083 15:10:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.083 15:10:22 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.083 15:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.083 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:37.083 Malloc0 00:29:37.083 15:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.083 15:10:22 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:37.083 15:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.083 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:37.083 [2024-04-26 15:10:22.769977] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.083 15:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.083 15:10:22 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.083 15:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.083 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:37.083 15:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.083 15:10:22 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.083 15:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.083 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:37.083 15:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.083 15:10:22 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.083 15:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.083 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:37.083 [2024-04-26 15:10:22.798269] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.083 15:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.083 15:10:22 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.083 15:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:37.083 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:29:37.083 15:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:37.083 15:10:22 -- host/target_disconnect.sh@50 -- # reconnectpid=3906994 00:29:37.083 15:10:22 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:37.083 15:10:22 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:37.340 EAL: No free 2048 kB hugepages reported on node 1
00:29:39.256 15:10:24 -- host/target_disconnect.sh@53 -- # kill -9 3906966
00:29:39.256 15:10:24 -- host/target_disconnect.sh@55 -- # sleep 2
00:29:39.256 Read completed with error (sct=0, sc=8)
00:29:39.256 starting I/O failed
00:29:39.256 Write completed with error (sct=0, sc=8)
00:29:39.256 starting I/O failed
00:29:39.256 [the "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats here for every I/O outstanding on the two affected qpairs; only the Read/Write direction varies between entries]
00:29:39.256 [2024-04-26 15:10:24.823108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.256 [2024-04-26 15:10:24.823449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.257 [2024-04-26 15:10:24.823672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-04-26 15:10:24.823842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-04-26 15:10:24.823870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-04-26 15:10:24.824043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-04-26 15:10:24.824156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-04-26 15:10:24.824182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
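On Linux, errno 111 is ECONNREFUSED: once the target process was killed with kill -9 above, nothing is listening on 10.0.0.2:4420, so every TCP connect attempt made by the reconnect example is refused. A minimal shell sketch of the same probe (an illustration, not part of the test; it assumes bash's /dev/tcp support):

    # Try to open a TCP connection to the (now dead) target listener.
    # With no listener on 10.0.0.2:4420 this fails with "Connection refused",
    # the same ECONNREFUSED (errno 111) reported by posix_sock_create above.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect() refused, as expected while the target is down"
    fi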
00:29:39.257 [the connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence shown above repeats continuously for the remainder of this stretch of the log while the reconnect example keeps retrying; only the timestamps differ between repetitions]
00:29:39.261 [2024-04-26 15:10:24.874656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.874789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.874811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.874986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.875219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.875250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.875456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.875676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.875726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.875952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.876125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.876155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.876404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.876607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.876655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.876857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.877077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.877107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.877355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.877556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.877605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 
00:29:39.261 [2024-04-26 15:10:24.877784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.878004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.878042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.878244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.878502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.878551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.878776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.879037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.879071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.879272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.879475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.879526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.879747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.879993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.880037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.880190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.880341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.880364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 00:29:39.261 [2024-04-26 15:10:24.880577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.880753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.261 [2024-04-26 15:10:24.880806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.261 qpair failed and we were unable to recover it. 
00:29:39.261 [2024-04-26 15:10:24.881059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.881237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.881267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.881424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.881631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.881654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.881919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.882128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.882158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.882362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.882562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.882613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.882842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.883101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.883132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.883396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.883614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.883663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.883905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.884029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.884058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 
00:29:39.262 [2024-04-26 15:10:24.884244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.884472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.884524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.884750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.884970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.885000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.885204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.885367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.885427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.885664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.885917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.885966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.886213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.886452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.886503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.886741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.887003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.887044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.887274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.887492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.887543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 
00:29:39.262 [2024-04-26 15:10:24.887768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.887993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.888035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.888283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.888489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.888540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.888791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.889032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.889062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.889219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.889487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.889536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.889753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.889936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.889972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.890245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.890478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.890527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.890685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.890841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.890869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 
00:29:39.262 [2024-04-26 15:10:24.891048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.891327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.891387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.891643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.891905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.891956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.892230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.892458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.892510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.892727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.892967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.892996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.893194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.893444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.893495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.893717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.893939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.893968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.262 [2024-04-26 15:10:24.894207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.894464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.894516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 
00:29:39.262 [2024-04-26 15:10:24.894712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.894867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.262 [2024-04-26 15:10:24.894895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.262 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.895166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.895415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.895466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.895689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.895941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.895970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.896183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.896401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.896453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.896600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.896831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.896884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.897119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.897387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.897439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.897660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.897874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.897903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 
00:29:39.263 [2024-04-26 15:10:24.898130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.898395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.898450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.898701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.898872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.898902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.899128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.899283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.899311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.899530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.899689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.899747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.899969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.900197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.900227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.900426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.900605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.900657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.900802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.900952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.900980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 
00:29:39.263 [2024-04-26 15:10:24.901259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.901480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.901530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.901657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.901801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.901824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.902091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.902374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.902428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.902650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.902788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.902854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.903094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.903313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.903342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.903496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.903776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.903829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.904094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.904330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.904391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 
00:29:39.263 [2024-04-26 15:10:24.904587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.904822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.904871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.905126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.905352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.905411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.905615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.905844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.905893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.906146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.906374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.906428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.906628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.906849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.906899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.263 qpair failed and we were unable to recover it. 00:29:39.263 [2024-04-26 15:10:24.907131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.907352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.263 [2024-04-26 15:10:24.907402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.907653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.907871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.907899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 
00:29:39.264 [2024-04-26 15:10:24.908133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.908321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.908350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.908604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.908834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.908885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.909015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.909188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.909228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.909467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.909644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.909694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.909946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.910073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.910102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.910261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.910405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.910444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.910658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.910863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.910892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 
00:29:39.264 [2024-04-26 15:10:24.911147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.911321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.911397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.911625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.911874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.911904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.912165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.912433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.912484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.912705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.912865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.912893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.913154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.913376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.913430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.913692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.913905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.913934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.914194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.914448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.914499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 
00:29:39.264 [2024-04-26 15:10:24.914753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.915002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.915044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.915264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.915426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.915480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.915724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.915940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.915969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.916223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.916380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.916438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.916693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.916912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.916940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.917205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.917432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.917483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.917690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.917947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.917976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 
00:29:39.264 [2024-04-26 15:10:24.918204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.918366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.918431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.918635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.918803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.918860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.919096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.919365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.919416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.919675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.919892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.919940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.920192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.920390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.920440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.264 qpair failed and we were unable to recover it. 00:29:39.264 [2024-04-26 15:10:24.920659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.264 [2024-04-26 15:10:24.920922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.265 [2024-04-26 15:10:24.920973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.265 qpair failed and we were unable to recover it. 00:29:39.265 [2024-04-26 15:10:24.921229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.265 [2024-04-26 15:10:24.921397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.265 [2024-04-26 15:10:24.921447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.265 qpair failed and we were unable to recover it. 
00:29:39.265 [2024-04-26 15:10:24.921610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.265 [2024-04-26 15:10:24.921876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.265 [2024-04-26 15:10:24.921926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:39.265 qpair failed and we were unable to recover it.
[The same failure pattern — two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats continuously, differing only in timestamps, from 15:10:24.921610 through 15:10:24.975338.]
00:29:39.270 [2024-04-26 15:10:24.975493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.975665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.975708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.270 qpair failed and we were unable to recover it. 00:29:39.270 [2024-04-26 15:10:24.978356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.978541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.978573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.270 qpair failed and we were unable to recover it. 00:29:39.270 [2024-04-26 15:10:24.978708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.978819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.978849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.270 qpair failed and we were unable to recover it. 00:29:39.270 [2024-04-26 15:10:24.979045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.979155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.979181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.270 qpair failed and we were unable to recover it. 00:29:39.270 [2024-04-26 15:10:24.979285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.979506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.270 [2024-04-26 15:10:24.979535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.270 qpair failed and we were unable to recover it. 00:29:39.270 [2024-04-26 15:10:24.979771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.557 [2024-04-26 15:10:24.979950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.557 [2024-04-26 15:10:24.979979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.557 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.980162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.980330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.980371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 
00:29:39.558 [2024-04-26 15:10:24.980504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.980625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.980653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.980774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.980905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.980933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.981090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.981219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.981245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.981411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.981556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.981584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.981694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.981850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.981879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.982045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.982201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.982244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.982391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.982535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.982563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 
00:29:39.558 [2024-04-26 15:10:24.982738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.982882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.982910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.983088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.983217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.983259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.983462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.983571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.983599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.983777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.983914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.983942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.984108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.984212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.984238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.984411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.984524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.984552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.984745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.984887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.984916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 
00:29:39.558 [2024-04-26 15:10:24.985032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.985183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.985209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.985323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.985464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.985493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.985682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.985826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.985854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.985961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.986094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.986122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.986264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.986410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.986439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.986579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.986704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.986732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.986903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.987074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.987101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 
00:29:39.558 [2024-04-26 15:10:24.987239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.987403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.987432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.987569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.987755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.987784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.987962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.988078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.988105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.988220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.988326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.988355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.988476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.988621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.988654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.988854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.988975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.558 [2024-04-26 15:10:24.989003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.558 qpair failed and we were unable to recover it. 00:29:39.558 [2024-04-26 15:10:24.989138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.989242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.989268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 
00:29:39.559 [2024-04-26 15:10:24.989399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.989512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.989540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.989730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.989838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.989862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.990048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.990180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.990208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.990393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.990565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.990593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.990734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.990904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.990944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.991112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.991219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.991248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.991416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.991560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.991589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 
00:29:39.559 [2024-04-26 15:10:24.991694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.991825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.991849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.991970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.992088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.992118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.992265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.992363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.992392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.992507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.992648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.992673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.992792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.992940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.992968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.993124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.993243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.993271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.993415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.993560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.993584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 
00:29:39.559 [2024-04-26 15:10:24.993726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.993867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.993896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.994006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.994170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.994199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.994366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.994497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.994520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.994653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.994764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.994793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.994912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.995031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.995070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.995192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.995383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.995405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.995660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.995850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.995880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 
00:29:39.559 [2024-04-26 15:10:24.996089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.996199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.996228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.996400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.996635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.996686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.559 [2024-04-26 15:10:24.996932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.997083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.559 [2024-04-26 15:10:24.997113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.559 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:24.997261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.997499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.997551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:24.997791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.997951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.997979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:24.998134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.998280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.998320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:24.998494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.998744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.998795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 
00:29:39.560 [2024-04-26 15:10:24.998982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.999103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.999130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:24.999318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.999438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.999476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:24.999667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.999850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:24.999880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:25.000112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:25.000277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:25.000306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:25.000447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:25.000620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:25.000649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:25.000889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:25.001047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:25.001076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 00:29:39.560 [2024-04-26 15:10:25.001257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:25.001509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.560 [2024-04-26 15:10:25.001539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:39.560 qpair failed and we were unable to recover it. 
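[Note: errno 111 on Linux is ECONNREFUSED: the kernel reports that the remote side actively refused the TCP connection, typically because nothing is listening on 10.0.0.2:4420 at that moment (e.g. the NVMe/TCP target is down or not yet restarted). A minimal standalone C sketch (plain sockets, not SPDK code; the address and port are copied from the log lines above) that reproduces the same errno when no listener is present:]

    /* Minimal standalone sketch (not SPDK code): shows what errno = 111 means
     * for a TCP connect() to the target address seen in the log above.
     * Assumes a Linux host; 10.0.0.2:4420 is taken from the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);           /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints
             * "connect failed, errno = 111 (Connection refused)". */
            printf("connect failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }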
00:29:39.560 [2024-04-26 15:10:25.001828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.560 [2024-04-26 15:10:25.001973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.560 [2024-04-26 15:10:25.002001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:39.560 qpair failed and we were unable to recover it.
[A burst of queued I/O completions follows: 32 "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" entries, each followed by "starting I/O failed".]
00:29:39.560 [2024-04-26 15:10:25.002367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[A second burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" entries follows for the second queue.]
00:29:39.561 [2024-04-26 15:10:25.002680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:39.561 [2024-04-26 15:10:25.002865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b20a0 is same with the state(5) to be set
00:29:39.561 [2024-04-26 15:10:25.003205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.561 [2024-04-26 15:10:25.003493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.561 [2024-04-26 15:10:25.003526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:39.561 qpair failed and we were unable to recover it.
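[Note: sct/sc in those entries are the NVMe completion status fields. Status Code Type 0 is the Generic Command Status set, in which status code 0x08 is "Command Aborted due to SQ Deletion" per the NVMe base specification (SPDK's SPDK_NVME_SC_ABORTED_SQ_DELETION): the outstanding reads and writes were aborted when the failed queue pairs were torn down. The "CQ transport error -6" is a negated Linux errno, 6 = ENXIO, which the log itself expands as "No such device or address". A tiny illustrative decoder (not part of this test) for the codes seen here:]

    /* Illustrative decoder (an assumption for readability, not test code) for
     * the status pairs in the log. sct = Status Code Type and sc = Status Code
     * from the NVMe completion queue entry; sct=0 selects the Generic Command
     * Status set defined in the NVMe base specification. */
    #include <stdio.h>

    static const char *nvme_generic_status(int sct, int sc)
    {
        if (sct != 0)
            return "non-generic status code type";
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "other generic status";
        }
    }

    int main(void)
    {
        /* (sct=0, sc=8) as seen on every failed read/write above. */
        printf("sct=0, sc=8 -> %s\n", nvme_generic_status(0, 0x08));
        return 0;
    }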
00:29:39.561 [2024-04-26 15:10:25.003811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.561 [2024-04-26 15:10:25.004081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.561 [2024-04-26 15:10:25.004109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:39.561 qpair failed and we were unable to recover it.
[The same retry sequence for tqpair=0x7fdb7c000b90 (addr=10.0.0.2, port=4420) repeats continuously from 15:10:25.003 through at least 15:10:25.019; every attempt ends with connect() errno = 111 followed by "qpair failed and we were unable to recover it."]
00:29:39.562 [2024-04-26 15:10:25.019605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.019757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.019786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.562 qpair failed and we were unable to recover it. 00:29:39.562 [2024-04-26 15:10:25.019949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.020114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.020144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.562 qpair failed and we were unable to recover it. 00:29:39.562 [2024-04-26 15:10:25.020267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.020396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.020420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.562 qpair failed and we were unable to recover it. 00:29:39.562 [2024-04-26 15:10:25.020603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.020737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.020766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.562 qpair failed and we were unable to recover it. 00:29:39.562 [2024-04-26 15:10:25.020916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.021039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.021070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.562 qpair failed and we were unable to recover it. 00:29:39.562 [2024-04-26 15:10:25.021193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.021387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.021410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.562 qpair failed and we were unable to recover it. 00:29:39.562 [2024-04-26 15:10:25.021524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.021669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.021698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.562 qpair failed and we were unable to recover it. 
00:29:39.562 [2024-04-26 15:10:25.021899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.022034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.562 [2024-04-26 15:10:25.022065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.022250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.022397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.022446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.022602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.022762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.022802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.022999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.023157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.023185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.023371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.023506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.023529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.023689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.023827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.023855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.024010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.024154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.024183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 
00:29:39.563 [2024-04-26 15:10:25.024341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.024492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.024531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.024673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.024812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.024841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.024989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.025120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.025150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.025354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.025505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.025534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.025692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.025881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.025910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.026070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.026207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.026232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.026381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.026502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.026526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 
00:29:39.563 [2024-04-26 15:10:25.026688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.026826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.026855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.026974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.027112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.027142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.027283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.027467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.027509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.027659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.027803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.027832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.027997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.028147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.028176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.028335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.028493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.028532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.028710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.028852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.028881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 
00:29:39.563 [2024-04-26 15:10:25.028998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.029146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.029175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.029347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.029543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.029572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.029743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.029881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.029910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.030058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.030229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.030259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.030435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.030573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.030612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.030785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.030954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.030983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.563 [2024-04-26 15:10:25.031110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.031252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.031281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 
00:29:39.563 [2024-04-26 15:10:25.031445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.031605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.563 [2024-04-26 15:10:25.031646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.563 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.031794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.031934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.031963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.032118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.032288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.032317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.032474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.032607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.032631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.032752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.032898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.032927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.033064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.033205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.033234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.033377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.033499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.033522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 
00:29:39.564 [2024-04-26 15:10:25.033707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.033848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.033877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.034034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.034145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.034175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.034351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.034483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.034521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.034660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.034830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.034859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.034973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.035133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.035158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.035262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.035444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.035484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.035604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.035752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.035781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 
00:29:39.564 [2024-04-26 15:10:25.035921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.036091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.036121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.036273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.036441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.036478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.036620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.036788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.036817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.036956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.037099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.037129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.037281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.037397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.037420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.037585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.037723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.037752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.037869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.038041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.038071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 
00:29:39.564 [2024-04-26 15:10:25.038227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.038395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.038436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.038611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.038749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.038777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.038918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.039083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.039113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.039227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.039347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.039384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.039527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.039707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.039736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.039876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.040041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.040072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.040221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.040324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.040347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 
00:29:39.564 [2024-04-26 15:10:25.040523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.040667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.040696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.564 qpair failed and we were unable to recover it. 00:29:39.564 [2024-04-26 15:10:25.040843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.564 [2024-04-26 15:10:25.040986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.041015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.041210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.041348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.041385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.041561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.041728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.041757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.041896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.042071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.042101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.042283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.042433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.042475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.042644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.042756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.042790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 
00:29:39.565 [2024-04-26 15:10:25.042943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.043111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.043141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.043334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.043465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.043502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.043688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.043856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.043885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.044035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.044212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.044251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.044415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.044592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.044648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.044785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.044892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.044920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.045089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.045226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.045254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 
00:29:39.565 [2024-04-26 15:10:25.045432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.045531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.045554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.045671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.045774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.045802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.045920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.046036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.046070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.046216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.046364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.046388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.046548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.046720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.046749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.046892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.047035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.047065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.047207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.047381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.047405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 
00:29:39.565 [2024-04-26 15:10:25.047583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.047754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.047783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.047894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.048054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.048084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.048222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.048388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.048411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.048591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.048733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.048762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.048931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.049052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.049082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.049255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.049427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.049454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.049638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.049750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.049778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 
00:29:39.565 [2024-04-26 15:10:25.049920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.050090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.050120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.050290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.050415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.050438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.565 qpair failed and we were unable to recover it. 00:29:39.565 [2024-04-26 15:10:25.050591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.565 [2024-04-26 15:10:25.050761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.050791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.050937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.051090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.051120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.051266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.051419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.051442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.051590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.051731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.051760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.051909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.052068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.052098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 
00:29:39.566 [2024-04-26 15:10:25.052210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.052347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.052370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.052520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.052690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.052723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.052829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.052976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.053005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.053171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.053311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.053334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.053461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.053575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.053604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.053774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.053941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.053970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 00:29:39.566 [2024-04-26 15:10:25.054088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.054219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.566 [2024-04-26 15:10:25.054244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.566 qpair failed and we were unable to recover it. 
00:29:39.571 [2024-04-26 15:10:25.114219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.571 [2024-04-26 15:10:25.114334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.571 [2024-04-26 15:10:25.114362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.571 qpair failed and we were unable to recover it. 00:29:39.571 [2024-04-26 15:10:25.114594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.571 [2024-04-26 15:10:25.114789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.571 [2024-04-26 15:10:25.114819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.571 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.115033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.115192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.115248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.115476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.115690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.115724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.115924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.116123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.116154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.116396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.116548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.116602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.116796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.116985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.117016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 
00:29:39.572 [2024-04-26 15:10:25.117245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.117432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.117480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.117645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.117846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.117899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.118084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.118286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.118315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.118534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.118762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.118812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.118981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.119151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.119180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.119384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.119570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.119617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.119829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.119988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.120053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 
00:29:39.572 [2024-04-26 15:10:25.120251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.120439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.120487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.120632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.120859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.120908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.121143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.121364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.121419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.121584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.121733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.121774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.122006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.122202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.122231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.122431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.122619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.122665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.122821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.123052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.123083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 
00:29:39.572 [2024-04-26 15:10:25.123254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.123407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.123441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.123624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.123846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.123896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.572 [2024-04-26 15:10:25.124092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.124325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.572 [2024-04-26 15:10:25.124358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.572 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.124577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.124782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.124842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.125034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.125219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.125251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.125439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.125649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.125735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.125979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.126152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.126183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 
00:29:39.573 [2024-04-26 15:10:25.126369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.126544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.126605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.126785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.126966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.126996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.127158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.127276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.127305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.127504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.127736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.127786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.127950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.128088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.128131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.128345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.128588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.128639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.128847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.129048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.129091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 
00:29:39.573 [2024-04-26 15:10:25.129286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.129521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.129567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.129760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.129931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.129960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.130207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.130454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.130502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.130726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.130923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.130952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.131127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.131303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.131344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.131515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.131767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.131830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.132065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.132211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.132241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 
00:29:39.573 [2024-04-26 15:10:25.132481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.132684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.132739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.132928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.133091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.133122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.133309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.133510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.133564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.133748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.133973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.134006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.573 qpair failed and we were unable to recover it. 00:29:39.573 [2024-04-26 15:10:25.134213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.134445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.573 [2024-04-26 15:10:25.134492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.134661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.134857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.134920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.135102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.135292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.135322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 
00:29:39.574 [2024-04-26 15:10:25.135483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.135663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.135693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.135875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.136078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.136122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.136256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.136467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.136527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.136776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.136981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.137011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.137282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.137538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.137588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.137793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.138026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.138068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.138239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.138505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.138557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 
00:29:39.574 [2024-04-26 15:10:25.138827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.138987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.139015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.139250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.139461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.139507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.139729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.139961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.139991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.140254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.140445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.140494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.140662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.140865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.140924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.141149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.141421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.141472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.141730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.141937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.141967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 
00:29:39.574 [2024-04-26 15:10:25.142207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.142507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.142556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.142739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.142964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.142995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.143192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.143362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.143436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.143715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.143926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.143955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.144078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.144286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.144316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.144558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.144842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.144894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.145094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.145320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.145385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 
00:29:39.574 [2024-04-26 15:10:25.145628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.145845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.145897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.146103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.146402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.146458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.146648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.146832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.146861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.574 [2024-04-26 15:10:25.147000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.147155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.574 [2024-04-26 15:10:25.147184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.574 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.147373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.147611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.147673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.147899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.148055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.148095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.148243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.148397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.148427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 
00:29:39.575 [2024-04-26 15:10:25.148638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.148806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.148841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.149047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.149272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.149302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.149488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.149689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.149739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.149942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.150130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.150161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.150374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.150513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.150562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.150743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.150941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.150970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.151165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.151350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.151373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 
00:29:39.575 [2024-04-26 15:10:25.151531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.151687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.151716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.151940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.152181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.152210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.152374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.152509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.152532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.152715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.152904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.152933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.153092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.153346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.153410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.153664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.153831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.153861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.154134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.154301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.154359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 
00:29:39.575 [2024-04-26 15:10:25.154511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.154734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.154788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.155032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.155279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.155309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.155540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.155766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.155816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.155955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.156112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.156142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.156333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.156575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.156632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.156842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.157000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.157037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.157269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.157522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.157569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 
00:29:39.575 [2024-04-26 15:10:25.157780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.157936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.157959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.158188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.158321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.158345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.158593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.158751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.158803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.159050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.159249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.575 [2024-04-26 15:10:25.159273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.575 qpair failed and we were unable to recover it. 00:29:39.575 [2024-04-26 15:10:25.159516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.576 [2024-04-26 15:10:25.159715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.576 [2024-04-26 15:10:25.159766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.576 qpair failed and we were unable to recover it. 00:29:39.576 [2024-04-26 15:10:25.159984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.576 [2024-04-26 15:10:25.160196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.576 [2024-04-26 15:10:25.160223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.576 qpair failed and we were unable to recover it. 00:29:39.576 [2024-04-26 15:10:25.160398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.576 [2024-04-26 15:10:25.160657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.576 [2024-04-26 15:10:25.160710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.576 qpair failed and we were unable to recover it. 
00:29:39.576 [... the same three-message sequence (two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, then one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420"), each followed by "qpair failed and we were unable to recover it.", repeats 147 more times with timestamps 2024-04-26 15:10:25.160916 through 15:10:25.227794 (runner prefixes 00:29:39.576-00:29:39.581) ...]
00:29:39.581 [2024-04-26 15:10:25.228048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.228251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.228281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.581 qpair failed and we were unable to recover it. 00:29:39.581 [2024-04-26 15:10:25.228493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.228704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.228754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.581 qpair failed and we were unable to recover it. 00:29:39.581 [2024-04-26 15:10:25.229024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.229294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.229323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.581 qpair failed and we were unable to recover it. 00:29:39.581 [2024-04-26 15:10:25.229546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.229802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.229854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.581 qpair failed and we were unable to recover it. 00:29:39.581 [2024-04-26 15:10:25.230097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.230311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.230340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.581 qpair failed and we were unable to recover it. 00:29:39.581 [2024-04-26 15:10:25.230558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.230808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.230858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.581 qpair failed and we were unable to recover it. 00:29:39.581 [2024-04-26 15:10:25.231104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.231303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.231332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.581 qpair failed and we were unable to recover it. 
00:29:39.581 [2024-04-26 15:10:25.231542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.231752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.231804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.581 qpair failed and we were unable to recover it. 00:29:39.581 [2024-04-26 15:10:25.232043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.232257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.581 [2024-04-26 15:10:25.232288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.232493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.232703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.232753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.232951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.233193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.233223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.233399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.233610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.233664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.233887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.234127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.234158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.234369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.234601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.234650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 
00:29:39.582 [2024-04-26 15:10:25.234867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.235109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.235139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.235382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.235605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.235656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.235921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.236187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.236217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.236441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.236643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.236694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.236955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.237227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.237258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.237512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.237767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.237817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.238068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.238314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.238343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 
00:29:39.582 [2024-04-26 15:10:25.238545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.238771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.238821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.238991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.239270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.239301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.239554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.239772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.239823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.240079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.240296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.240327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.240493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.240750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.240799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.241006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.241251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.241281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.241532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.241721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.241770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 
00:29:39.582 [2024-04-26 15:10:25.242032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.242257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.242287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.242478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.242719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.242770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.243028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.243268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.243297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.243472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.243683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.582 [2024-04-26 15:10:25.243734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.582 qpair failed and we were unable to recover it. 00:29:39.582 [2024-04-26 15:10:25.243981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.244247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.244278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.244453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.244679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.244732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.244976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.245231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.245261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 
00:29:39.583 [2024-04-26 15:10:25.245460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.245636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.245690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.245918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.246138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.246169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.246427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.246684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.246735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.246939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.247178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.247213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.247460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.247731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.247779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.248036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.248230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.248259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.248470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.248719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.248769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 
00:29:39.583 [2024-04-26 15:10:25.249011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.249216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.249245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.249487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.249745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.249795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.250001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.250255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.250286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.250454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.250657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.250711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.250959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.251169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.251200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.251407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.251648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.251701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.251925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.252119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.252154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 
00:29:39.583 [2024-04-26 15:10:25.252364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.252583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.252635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.252837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.253080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.253111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.253368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.253594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.253645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.253902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.254122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.254152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.254358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.254623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.254673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.583 [2024-04-26 15:10:25.254879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.255079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.583 [2024-04-26 15:10:25.255110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.583 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.255373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.255635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.255686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 
00:29:39.584 [2024-04-26 15:10:25.255911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.256168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.256198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.256420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.256643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.256699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.256927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.257127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.257162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.257425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.257672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.257722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.257988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.258182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.258212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.258473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.258744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.258794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.259033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.259289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.259319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 
00:29:39.584 [2024-04-26 15:10:25.259575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.259757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.259810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.260068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.260261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.260291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.260505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.260770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.260819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.261077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.261301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.261331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.261502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.261764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.261815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.262071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.262270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.262304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.262464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.262733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.262785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 
00:29:39.584 [2024-04-26 15:10:25.263048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.263308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.263338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.263526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.263740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.263791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.264048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.264303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.264333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.264552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.264822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.264873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.265085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.265361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.265423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.265684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.265859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.265888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.266035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.266292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.266322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 
00:29:39.584 [2024-04-26 15:10:25.266522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.266746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.266795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.267048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.267252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.267282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.267547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.267773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.267826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.268066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.268294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.268323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.268576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.268771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.268823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.269082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.269285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.584 [2024-04-26 15:10:25.269315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.584 qpair failed and we were unable to recover it. 00:29:39.584 [2024-04-26 15:10:25.269521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.269754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.269809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 
00:29:39.585 [2024-04-26 15:10:25.270064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.270322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.270352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.270561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.270787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.270836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.271096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.271351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.271381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.271633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.271857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.271906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.272133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.272399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.272450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.272692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.272908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.272960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.273220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.273478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.273528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 
00:29:39.585 [2024-04-26 15:10:25.273736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.273946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.273976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.274246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.274500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.274548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.274810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.275026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.275056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.275317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.275589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.275639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.275901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.276115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.276146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.276327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.276597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.276645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 00:29:39.585 [2024-04-26 15:10:25.276896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.277127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.585 [2024-04-26 15:10:25.277157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.585 qpair failed and we were unable to recover it. 
00:29:39.585 [2024-04-26 15:10:25.277370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.585 [2024-04-26 15:10:25.277634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.585 [2024-04-26 15:10:25.277686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:39.585 qpair failed and we were unable to recover it.
00:29:39.585 [... the four-line failure cycle above repeats for every subsequent reconnect attempt from 15:10:25.277 through 15:10:25.355 (elapsed markers 00:29:39.585 through 00:29:39.861), always posix.c:1037:posix_sock_create connect() failures with errno = 111 followed by nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock reporting a sock connection error for the same tqpair=0x7fdb7c000b90 at addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:29:39.861 [2024-04-26 15:10:25.355868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.356122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.356152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.356350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.356554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.356606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.356872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.357091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.357120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.357268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.357466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.357524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.357737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.357925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.357955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.358132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.358269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.358298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.358499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.358721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.358773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 
00:29:39.861 [2024-04-26 15:10:25.359040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.359239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.359264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.359543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.359825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.359877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.360093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.360252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.360296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.360465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.360734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.360793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.361049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.361218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.361246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.361425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.361699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.361747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.361972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.362134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.362165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 
00:29:39.861 [2024-04-26 15:10:25.362402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.362661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.362713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.362930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.363133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.363163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.363374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.363607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.363654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.363870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.364109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.364139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.364376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.364566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.364618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.364824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.365000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.365038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.365189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.365318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.365342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 
00:29:39.861 [2024-04-26 15:10:25.365533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.365703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.365732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.365928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.366131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.366162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.366329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.366483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.366506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.366728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.366983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.367014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.861 [2024-04-26 15:10:25.367250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.367450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.861 [2024-04-26 15:10:25.367500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.861 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.367766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.367987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.368017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.368251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.368484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.368540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 
00:29:39.862 [2024-04-26 15:10:25.368730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.368929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.368958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.369176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.369333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.369357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.369576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.369842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.369898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.370170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.370352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.370416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.370628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.370862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.370914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.371157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.371335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.371408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.371631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.371908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.371962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 
00:29:39.862 [2024-04-26 15:10:25.372203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.372471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.372522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.372711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.372868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.372897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.373118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.373345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.373406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.373643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.373847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.373877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.374053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.374308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.374338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.374558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.374828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.374881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.375109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.375291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.375321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 
00:29:39.862 [2024-04-26 15:10:25.375526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.375750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.375801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.376007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.376261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.376291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.376507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.376654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.376678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.376886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.377112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.377143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.377377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.377539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.377587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.377751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.377968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.377999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.378176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.378412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.378463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 
00:29:39.862 [2024-04-26 15:10:25.378638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.378825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.378890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.379120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.379279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.379320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.379578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.379814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.379866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.380084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.380247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.862 [2024-04-26 15:10:25.380277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.862 qpair failed and we were unable to recover it. 00:29:39.862 [2024-04-26 15:10:25.380530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.380728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.380781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.381037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.381249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.381278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.381491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.381674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.381724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 
00:29:39.863 [2024-04-26 15:10:25.381985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.382263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.382295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.382508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.382775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.382822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.383036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.383242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.383272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.383531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.383793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.383840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.384101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.384306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.384336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.384578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.384792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.384844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.385009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.385144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.385185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 
00:29:39.863 [2024-04-26 15:10:25.385475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.385709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.385769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.385963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.386143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.386174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.386298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.386495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.386533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.386709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.386906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.386935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.387149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.387350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.387413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.387588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.387849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.387898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.388168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.388423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.388476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 
00:29:39.863 [2024-04-26 15:10:25.388699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.388960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.388990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.389167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.389376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.389424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.389660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.389883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.389930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.390122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.390355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.390418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.390669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.390845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.390875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.391030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.391176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.391208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.391423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.391601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.391651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 
00:29:39.863 [2024-04-26 15:10:25.391857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.392064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.392094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.392293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.392523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.392573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.392767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.392920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.392949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.863 [2024-04-26 15:10:25.393166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.393436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.863 [2024-04-26 15:10:25.393486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.863 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.393638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.393920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.393971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.394202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.394409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.394463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.394700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.394909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.394939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 
00:29:39.864 [2024-04-26 15:10:25.395199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.395423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.395472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.395737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.395909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.395939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.396081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.396321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.396351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.396538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.396755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.396804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.396919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.397145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.397176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.397449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.397721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.397772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.397990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.398214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.398245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 
00:29:39.864 [2024-04-26 15:10:25.398508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.398706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.398756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.398973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.399085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.399128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.399350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.399575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.399626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.399846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.400047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.400078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.400258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.400453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.400491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.400746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.400958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.400987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.401198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.401445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.401498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 
00:29:39.864 [2024-04-26 15:10:25.401719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.401922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.401952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.402206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.402485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.402534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.402765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.402991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.403029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.403203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.403415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.403477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.403735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.403950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.403979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.404223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.404461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.404511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 00:29:39.864 [2024-04-26 15:10:25.404727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.404984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.864 [2024-04-26 15:10:25.405016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.864 qpair failed and we were unable to recover it. 
00:29:39.870 [2024-04-26 15:10:25.475068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.475347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.475400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.475658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.475882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.475930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.476115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.476312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.476352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.476532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.476757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.476811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.477046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.477241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.477271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.477537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.477745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.477796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.478059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.478280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.478310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 
00:29:39.870 [2024-04-26 15:10:25.478569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.478799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.478859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.479073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.479290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.479320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.479542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.479742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.479793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.479985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.480247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.480277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.480512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.480734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.480786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.870 qpair failed and we were unable to recover it. 00:29:39.870 [2024-04-26 15:10:25.480974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.870 [2024-04-26 15:10:25.481201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.481232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.481488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.481695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.481744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 
00:29:39.871 [2024-04-26 15:10:25.482001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.482276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.482306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.482580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.482843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.482894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.483147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.483351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.483414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.483670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.483894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.483950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.484173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.484398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.484449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.484705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.484964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.484994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.485287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.485522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.485573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 
00:29:39.871 [2024-04-26 15:10:25.485826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.486084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.486114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.486340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.486570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.486620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.486871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.487054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.487085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.487314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.487533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.487580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.487847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.488110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.488139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.488366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.488584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.488632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.488883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.489049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.489084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 
00:29:39.871 [2024-04-26 15:10:25.489337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.489606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.489655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.489878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.490135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.490165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.490397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.490615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.490663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.490883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.491166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.491217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.491431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.491591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.491628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.491866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.492107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.492138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.492315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.492578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.492628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 
00:29:39.871 [2024-04-26 15:10:25.492882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.493152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.493204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.493423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.493703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.493753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.494008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.494283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.494320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.871 qpair failed and we were unable to recover it. 00:29:39.871 [2024-04-26 15:10:25.494577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.494843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.871 [2024-04-26 15:10:25.494895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.495174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.495399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.495449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.495720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.495939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.495968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.496229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.496508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.496559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 
00:29:39.872 [2024-04-26 15:10:25.496818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.497036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.497065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.497284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.497514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.497565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.497795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.497977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.498007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.498247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.498525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.498573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.498798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.499056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.499086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.499269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.499479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.499529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.499799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.500065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.500095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 
00:29:39.872 [2024-04-26 15:10:25.500355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.500580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.500630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.500886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.501152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.501182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.501391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.501616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.501667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.501934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.502145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.502175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.502356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.502580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.502629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.502879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.503142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.503173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.503355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.503627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.503676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 
00:29:39.872 [2024-04-26 15:10:25.503839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.504043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.504067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.504262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.504492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.504543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.504753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.504959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.504988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.505210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.505472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.505520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.505771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.505978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.506006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.506287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.506555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.506603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.506862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.507070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.507101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 
00:29:39.872 [2024-04-26 15:10:25.507305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.507576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.507624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.507895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.508128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.872 [2024-04-26 15:10:25.508158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.872 qpair failed and we were unable to recover it. 00:29:39.872 [2024-04-26 15:10:25.508412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.508665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.508715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.508974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.509240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.509270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.509529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.509790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.509840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.510100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.510352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.510382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.510639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.510906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.510957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 
00:29:39.873 [2024-04-26 15:10:25.511160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.511355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.511417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.511678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.511889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.511941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.512195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.512468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.512518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.512782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.513038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.513068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.513342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.513602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.513652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.513915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.514120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.514151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.514404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.514639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.514690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 
00:29:39.873 [2024-04-26 15:10:25.514896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.515155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.515185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.515458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.515663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.515713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.515943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.516161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.516192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.516445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.516650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.516702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.516928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.517190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.517221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.517479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.517754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.517805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.517966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.518174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.518199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 
00:29:39.873 [2024-04-26 15:10:25.518425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.518660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.518711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.518923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.519135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.519165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.519377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.519560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.519610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.519826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.520084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.520125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.520381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.520601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.520652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.520915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.521128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.521158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.521421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.521645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.521694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 
00:29:39.873 [2024-04-26 15:10:25.521923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.522133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.522164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.873 qpair failed and we were unable to recover it. 00:29:39.873 [2024-04-26 15:10:25.522428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.873 [2024-04-26 15:10:25.522642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.522693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.874 qpair failed and we were unable to recover it. 00:29:39.874 [2024-04-26 15:10:25.522970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.523203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.523233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.874 qpair failed and we were unable to recover it. 00:29:39.874 [2024-04-26 15:10:25.523455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.523677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.523728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.874 qpair failed and we were unable to recover it. 00:29:39.874 [2024-04-26 15:10:25.523981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.524229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.524260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.874 qpair failed and we were unable to recover it. 00:29:39.874 [2024-04-26 15:10:25.524537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.524810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.524858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.874 qpair failed and we were unable to recover it. 00:29:39.874 [2024-04-26 15:10:25.525075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.525302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.525331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.874 qpair failed and we were unable to recover it. 
00:29:39.874 [2024-04-26 15:10:25.525576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.525763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.874 [2024-04-26 15:10:25.525825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:39.874 qpair failed and we were unable to recover it.
[... the same four-message sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7fdb7c000b90 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats continuously for every reconnect attempt between 15:10:25.525 and 15:10:25.602; only the timestamps advance, all other fields are unchanged ...]
00:29:40.152 [2024-04-26 15:10:25.601933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.152 [2024-04-26 15:10:25.602105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.152 [2024-04-26 15:10:25.602147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.152 qpair failed and we were unable to recover it.
00:29:40.152 [2024-04-26 15:10:25.602270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.152 [2024-04-26 15:10:25.602445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.152 [2024-04-26 15:10:25.602474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.152 qpair failed and we were unable to recover it. 00:29:40.152 [2024-04-26 15:10:25.602609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.152 [2024-04-26 15:10:25.602833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.152 [2024-04-26 15:10:25.602862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.603097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.603225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.603255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.603481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.603701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.603732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.603999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.604156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.604186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.604346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.604629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.604680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.604924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.605051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.605081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 
00:29:40.153 [2024-04-26 15:10:25.605216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.605408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.605437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.605615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.605807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.605837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.606051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.606216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.606245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.606409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.606614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.606666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.606881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.607011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.607112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.608503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.608784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.608836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.609040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.609192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.609221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 
00:29:40.153 [2024-04-26 15:10:25.609360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.609536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.609561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.609692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.609834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.609862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.610014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.610145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.610175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.610362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.610542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.610600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.610755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.610891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.610920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.611045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.611186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.611216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.612030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.612192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.612222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 
00:29:40.153 [2024-04-26 15:10:25.612403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.612560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.612589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.612735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.612880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.612910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.613069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.613212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.613241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.613425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.613616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.613674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.614474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.614665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.153 [2024-04-26 15:10:25.614696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.153 qpair failed and we were unable to recover it. 00:29:40.153 [2024-04-26 15:10:25.614863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.615017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.615060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.615215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.615375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.615404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 
00:29:40.154 [2024-04-26 15:10:25.615550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.615700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.615730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.615887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.616089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.616119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.616249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.616360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.616389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.616572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.616723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.616753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.616923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.617076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.617103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.617290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.617418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.617449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.617600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.617734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.617765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 
00:29:40.154 [2024-04-26 15:10:25.617926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.618084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.618127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.618282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.618411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.618440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.618624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.618771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.618803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.618956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.619109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.619136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.619298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.619511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.619560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.619725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.619879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.619909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.620050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.620912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.620946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 
00:29:40.154 [2024-04-26 15:10:25.621113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.621233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.621263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.621382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.621525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.621554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.621717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.621870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.621894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.622056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.622185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.622214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.622376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.622518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.622547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.622701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.622840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.622865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.623008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.623192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.623218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 
00:29:40.154 [2024-04-26 15:10:25.623392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.623540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.623569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.154 qpair failed and we were unable to recover it. 00:29:40.154 [2024-04-26 15:10:25.623696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.154 [2024-04-26 15:10:25.623835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.623865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.624015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.624178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.624205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.624333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.624497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.624526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.624647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.624802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.624832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.624963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.625102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.625129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.625253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.625403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.625433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 
00:29:40.155 [2024-04-26 15:10:25.625579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.625728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.625757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.625898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.626045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.626090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.626255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.626385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.626414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.626537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.626678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.626708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.626874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.627031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.627062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.627187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.627325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.627354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.627517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.627683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.627718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 
00:29:40.155 [2024-04-26 15:10:25.627871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.628038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.628081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.628228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.628378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.628408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.628555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.628749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.628777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.628909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.629038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.629071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.629230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.629385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.629414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.629573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.629702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.629731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.629896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.630064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.630106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 
00:29:40.155 [2024-04-26 15:10:25.630223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.630390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.630418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.630563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.630671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.630699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.630890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.631089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.631116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.631312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.631540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.631569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.631752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.631930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.631959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.155 [2024-04-26 15:10:25.632127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.632325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.155 [2024-04-26 15:10:25.632349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.155 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.632560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.632767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.632790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 
00:29:40.156 [2024-04-26 15:10:25.632975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.633144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.633178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.633352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.633549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.633587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.633742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.633957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.633985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.634164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.634368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.634392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.634580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.634718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.634755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.634916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.635115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.635145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.635307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.635490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.635547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 
00:29:40.156 [2024-04-26 15:10:25.635714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.635851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.635874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.636040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.636197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.636226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.636437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.636573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.636630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.636825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.637001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.637041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.637165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.637292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.637322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.637537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.637681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.637745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.637965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.638094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.638120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 
00:29:40.156 [2024-04-26 15:10:25.638245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.638396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.638425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.638581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.638698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.638727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.156 qpair failed and we were unable to recover it. 00:29:40.156 [2024-04-26 15:10:25.638858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.638993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.156 [2024-04-26 15:10:25.639040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.639180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.639293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.639323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.639465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.639641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.639670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.639858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.640033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.640059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.640249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.640410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.640474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 
00:29:40.157 [2024-04-26 15:10:25.640592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.640734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.640763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.640893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.641032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.641056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.641210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.641440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.641469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.641622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.641834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.641862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.642063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.642227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.642255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.642408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.642557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.642586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 00:29:40.157 [2024-04-26 15:10:25.642764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.642903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.157 [2024-04-26 15:10:25.642932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.157 qpair failed and we were unable to recover it. 
00:29:40.164 [2024-04-26 15:10:25.691384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.691537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.691565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.691732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.691892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.691935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.692082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.692188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.692217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.692365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.692535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.692568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.692723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.692853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.692878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.693057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.693168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.693196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.693350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.693484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.693513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 
00:29:40.164 [2024-04-26 15:10:25.693655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.693801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.693825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.693973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.694129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.694158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.694273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.694411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.694439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.694605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.694741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.694765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.694914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.695072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.695099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.695273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.695463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.695492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.695618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.695777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.695805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 
00:29:40.164 [2024-04-26 15:10:25.695929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.696048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.696078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.696222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.696361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.696390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.696558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.696701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.696726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.696885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.697032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.697062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.697205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.697345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.697374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.697509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.697643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.697668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.697813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.697996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.698032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 
00:29:40.164 [2024-04-26 15:10:25.698188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.698324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.698353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.698461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.698626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.164 [2024-04-26 15:10:25.698655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.164 qpair failed and we were unable to recover it. 00:29:40.164 [2024-04-26 15:10:25.698797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.698937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.698961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.699139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.699271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.699297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.699480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.699593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.699622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.699771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.699902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.699926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.700076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.700248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.700277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 
00:29:40.165 [2024-04-26 15:10:25.700449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.700563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.700592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.700734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.700871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.700896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.701078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.701251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.701279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.701458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.701601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.701628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.701763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.701927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.701953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.702056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.702215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.702239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.702392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.702528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.702554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 
00:29:40.165 [2024-04-26 15:10:25.702695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.702858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.702885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.703065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.703238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.703265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.703408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.703546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.703572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.703744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.703883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.703910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.704067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.704180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.704205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.704356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.704479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.704508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.704682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.704821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.704849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 
00:29:40.165 [2024-04-26 15:10:25.704970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.705096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.705124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.705271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.705381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.705408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.705589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.705774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.705798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.705970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.706100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.706125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.706288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.706424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.706452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.706640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.706766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.706807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.165 [2024-04-26 15:10:25.706927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.707082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.707108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 
00:29:40.165 [2024-04-26 15:10:25.707241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.707373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.165 [2024-04-26 15:10:25.707406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.165 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.707550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.707689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.707718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.707916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.708052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.708100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.708244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.708369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.708399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.708546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.708670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.708699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.708872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.708996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.709032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.709171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.709295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.709338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 
00:29:40.166 [2024-04-26 15:10:25.709495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.709617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.709646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.709793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.709906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.709938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.710085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.710199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.710224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.710386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.710504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.710533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.710708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.710836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.710874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.711034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.711162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.711188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.711307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.711445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.711476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 
00:29:40.166 [2024-04-26 15:10:25.711604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.711737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.711761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.711891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.712012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.712052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.712212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.712322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.712350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.712514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.712654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.712692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.712816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.712966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.712997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.713144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.713262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.713288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.713424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.713577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.713601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 
00:29:40.166 [2024-04-26 15:10:25.713739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.713887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.713916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.714050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.714183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.714209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.714333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.714483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.714507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.714626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.714755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.714783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.714910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.715039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.715084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.166 [2024-04-26 15:10:25.715192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.715338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.166 [2024-04-26 15:10:25.715362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.166 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.715520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.715630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.715658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 
00:29:40.167 [2024-04-26 15:10:25.716729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.716894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.716924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.717081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.717200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.717226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.717368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.717541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.717571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.717708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.717850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.717879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.718055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.718171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.718198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.718367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.718519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.718548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.718691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.718843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.718872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 
00:29:40.167 [2024-04-26 15:10:25.719648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.719809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.719839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.719994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.720131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.720158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.720271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.720432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.720461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.720603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.720739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.720762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.720882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.721030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.721075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.721220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.721336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.721365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.721513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.721617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.721640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 
00:29:40.167 [2024-04-26 15:10:25.721772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.721903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.721932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.722083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.722209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.722236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.722359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.722485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.722509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.722637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.722778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.722806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.722920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.723081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.723108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.723216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.723339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.167 [2024-04-26 15:10:25.723364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.167 qpair failed and we were unable to recover it. 00:29:40.167 [2024-04-26 15:10:25.723515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.723660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.723689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.168 qpair failed and we were unable to recover it. 
00:29:40.168 [2024-04-26 15:10:25.723814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.723980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.724009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.168 qpair failed and we were unable to recover it. 00:29:40.168 [2024-04-26 15:10:25.724177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.724287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.724327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.168 qpair failed and we were unable to recover it. 00:29:40.168 [2024-04-26 15:10:25.724477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.724649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.724678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.168 qpair failed and we were unable to recover it. 00:29:40.168 [2024-04-26 15:10:25.724795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.724936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.724965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.168 qpair failed and we were unable to recover it. 00:29:40.168 [2024-04-26 15:10:25.725094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.725212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.725239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.168 qpair failed and we were unable to recover it. 00:29:40.168 [2024-04-26 15:10:25.725367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.725552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.725581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.168 qpair failed and we were unable to recover it. 00:29:40.168 [2024-04-26 15:10:25.725736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.725863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.168 [2024-04-26 15:10:25.725895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.168 qpair failed and we were unable to recover it. 
00:29:40.168 [2024-04-26 15:10:25.726050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.168 [2024-04-26 15:10:25.726155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.168 [2024-04-26 15:10:25.726181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:40.168 qpair failed and we were unable to recover it.
00:29:40.174 [repeated log output collapsed: the four-line pattern above, two posix_sock_create connect() failures with errno = 111 followed by an nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it.", recurs back-to-back from 15:10:25.726 through 15:10:25.775, always for the same tqpair (0x7fdb7c000b90) and the same target 10.0.0.2:4420, with no reconnect attempt succeeding]
00:29:40.174 [2024-04-26 15:10:25.775668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.775772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.775800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.775921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.776073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.776103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.776251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.776383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.776404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.776563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.776707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.776736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.776850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.776989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.777039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.777213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.777349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.777388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.777562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.777676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.777704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 
00:29:40.174 [2024-04-26 15:10:25.777850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.777984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.778012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.778190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.778354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.778395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.778542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.778710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.778738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.778878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.778988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.779016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.779152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.779285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.779327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.174 qpair failed and we were unable to recover it. 00:29:40.174 [2024-04-26 15:10:25.779507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.174 [2024-04-26 15:10:25.779649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.779677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.779816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.779959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.779987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 
00:29:40.175 [2024-04-26 15:10:25.780165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.780306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.780344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.780458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.780635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.780664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.780777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.780926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.780955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.781107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.781243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.781266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.781394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.781560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.781589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.781739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.781883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.781912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.782080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.782235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.782259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 
00:29:40.175 [2024-04-26 15:10:25.782409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.782581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.782610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.782754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.782865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.782893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.783075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.783249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.783273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.783428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.783559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.783588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.783706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.783842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.783871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.784051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.784182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.784206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.784363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.784529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.784558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 
00:29:40.175 [2024-04-26 15:10:25.784692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.784833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.784861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.785029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.785161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.785184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.785364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.785530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.785558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.785698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.785837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.785865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.786053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.786235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.786259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.786411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.786587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.786621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.786817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.786987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.787015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 
00:29:40.175 [2024-04-26 15:10:25.787174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.787273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.787296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.787478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.787624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.787653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.787821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.787989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.788017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.788202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.788335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.788375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.175 qpair failed and we were unable to recover it. 00:29:40.175 [2024-04-26 15:10:25.788551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.788665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.175 [2024-04-26 15:10:25.788693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.788834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.788999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.789035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.789189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.789344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.789367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 
00:29:40.176 [2024-04-26 15:10:25.789543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.789684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.789713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.789861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.789996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.790042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.790184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.790346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.790369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.790525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.790670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.790698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.790842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.790982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.791015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.791172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.791339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.791377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.791527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.791694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.791722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 
00:29:40.176 [2024-04-26 15:10:25.791896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.792063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.792092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.792230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.792382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.792408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.792564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.792713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.792741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.792880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.793024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.793053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.793209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.793341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.793364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.793484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.793654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.793683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.793849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.794013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.794049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 
00:29:40.176 [2024-04-26 15:10:25.794230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.794364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.794413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.794562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.794701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.794729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.794873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.795017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.795052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.795186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.795322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.795345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.795487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.795634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.795663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.795797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.795961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.795989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.796138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.796263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.796287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 
00:29:40.176 [2024-04-26 15:10:25.796470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.796608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.796637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.176 [2024-04-26 15:10:25.796778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.796944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.176 [2024-04-26 15:10:25.796972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.176 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.797150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.797323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.797352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.797525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.797706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.797744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.797882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.798030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.798059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.798206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.798342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.798366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.798522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.798639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.798667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 
00:29:40.177 [2024-04-26 15:10:25.798816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.798929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.798957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.799115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.799246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.799271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.799442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.799622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.799667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.799837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.799985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.800014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.800165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.800294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.800334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.800449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.800595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.800624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.800762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.800930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.800959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 
00:29:40.177 [2024-04-26 15:10:25.801140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.801268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.801293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.801448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.801619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.801648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.801764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.801902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.801930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.802117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.802222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.802246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.802392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.802531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.802559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.802698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.802864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.802892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.803046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.803205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.803229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 
00:29:40.177 [2024-04-26 15:10:25.803358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.803523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.803551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.803700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.803841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.803870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.804046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.804183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.804207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.804309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.804450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.804478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.177 [2024-04-26 15:10:25.804615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.804724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.177 [2024-04-26 15:10:25.804752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.177 qpair failed and we were unable to recover it. 00:29:40.178 [2024-04-26 15:10:25.804899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.805043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.805067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.178 qpair failed and we were unable to recover it. 00:29:40.178 [2024-04-26 15:10:25.805212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.805329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.805357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.178 qpair failed and we were unable to recover it. 
00:29:40.178 [2024-04-26 15:10:25.805531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.805701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.805730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.178 qpair failed and we were unable to recover it. 00:29:40.178 [2024-04-26 15:10:25.805898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.806067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.806110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.178 qpair failed and we were unable to recover it. 00:29:40.178 [2024-04-26 15:10:25.806284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.806405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.806433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.178 qpair failed and we were unable to recover it. 00:29:40.178 [2024-04-26 15:10:25.806598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.806767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.806795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.178 qpair failed and we were unable to recover it. 00:29:40.178 [2024-04-26 15:10:25.806944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.807110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.807151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.178 qpair failed and we were unable to recover it. 00:29:40.178 [2024-04-26 15:10:25.807311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.807484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.807518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.178 qpair failed and we were unable to recover it. 00:29:40.178 [2024-04-26 15:10:25.807676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.807845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.178 [2024-04-26 15:10:25.807874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.178 qpair failed and we were unable to recover it. 
00:29:40.178 [2024-04-26 15:10:25.808037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.178 [2024-04-26 15:10:25.808201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.178 [2024-04-26 15:10:25.808226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:40.178 qpair failed and we were unable to recover it.
...
00:29:40.184 [2024-04-26 15:10:25.857692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.184 [2024-04-26 15:10:25.857823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.184 [2024-04-26 15:10:25.857846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:40.184 qpair failed and we were unable to recover it.
00:29:40.184 [2024-04-26 15:10:25.858033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.858174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.858202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.184 qpair failed and we were unable to recover it. 00:29:40.184 [2024-04-26 15:10:25.858374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.858490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.858520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.184 qpair failed and we were unable to recover it. 00:29:40.184 [2024-04-26 15:10:25.858626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.858723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.858746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.184 qpair failed and we were unable to recover it. 00:29:40.184 [2024-04-26 15:10:25.858915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.859059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.859089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.184 qpair failed and we were unable to recover it. 00:29:40.184 [2024-04-26 15:10:25.859210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.859350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.859379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.184 qpair failed and we were unable to recover it. 00:29:40.184 [2024-04-26 15:10:25.859510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.184 [2024-04-26 15:10:25.859647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.859671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.859828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.859986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.860015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 
00:29:40.185 [2024-04-26 15:10:25.860161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.860326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.860350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.860489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.860620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.860645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.860797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.860968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.860997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.861149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.861286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.861316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.861460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.861585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.861609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.861765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.861899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.861927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.862074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.862187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.862215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 
00:29:40.185 [2024-04-26 15:10:25.862401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.862506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.862530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.862666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.862780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.862807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.862983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.863099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.863128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.863275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.863415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.863439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.863575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.863722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.863750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.863892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.864034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.864062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.864219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.864348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.864372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 
00:29:40.185 [2024-04-26 15:10:25.864559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.864697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.864726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.864867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.865011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.865066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.865191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.865360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.865383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.865571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.865694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.865723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.865834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.865972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.866000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.866165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.866338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.866362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.866520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.866651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.866679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 
00:29:40.185 [2024-04-26 15:10:25.866796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.866964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.866992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.867118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.867256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.867280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.867412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.867578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.867606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.867749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.867914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.185 [2024-04-26 15:10:25.867941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.185 qpair failed and we were unable to recover it. 00:29:40.185 [2024-04-26 15:10:25.868080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.868217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.868241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.868374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.868511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.868540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.868712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.868829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.868857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 
00:29:40.186 [2024-04-26 15:10:25.869023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.869169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.869192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.869315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.869462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.869490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.869635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.869803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.869831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.869995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.870155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.870179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.870287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.870467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.870495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.870670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.870818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.870846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.871029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.871165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.871207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 
00:29:40.186 [2024-04-26 15:10:25.871374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.871536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.871600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.871762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.871950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.871979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.872168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.872322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.872350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.872498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.872609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.872637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.872781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.872927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.872956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.873131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.873264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.873288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.873437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.873626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.873688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 
00:29:40.186 [2024-04-26 15:10:25.873835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.873980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.874008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.874167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.874305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.874346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.874496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.874639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.874666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.874808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.874927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.874954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.875085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.875244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.875267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.875394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.875509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.875537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.186 qpair failed and we were unable to recover it. 00:29:40.186 [2024-04-26 15:10:25.875687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.875854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.186 [2024-04-26 15:10:25.875883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.187 qpair failed and we were unable to recover it. 
00:29:40.187 [2024-04-26 15:10:25.876045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.187 [2024-04-26 15:10:25.876186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.187 [2024-04-26 15:10:25.876210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.187 qpair failed and we were unable to recover it. 00:29:40.465 [2024-04-26 15:10:25.876319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.465 [2024-04-26 15:10:25.876433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.465 [2024-04-26 15:10:25.876461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.465 qpair failed and we were unable to recover it. 00:29:40.465 [2024-04-26 15:10:25.876637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.465 [2024-04-26 15:10:25.876777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.465 [2024-04-26 15:10:25.876805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.465 qpair failed and we were unable to recover it. 00:29:40.465 [2024-04-26 15:10:25.876967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.465 [2024-04-26 15:10:25.877116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.465 [2024-04-26 15:10:25.877141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.465 qpair failed and we were unable to recover it. 00:29:40.465 [2024-04-26 15:10:25.877314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.877454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.877484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.877635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.877749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.877777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.877908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.878077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.878103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 
00:29:40.466 [2024-04-26 15:10:25.878280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.878420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.878450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.878597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.878743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.878771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.878906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.879052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.879077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.879244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.879393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.879422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.879566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.879714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.879742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.879892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.880046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.880069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.880206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.880401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.880458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 
00:29:40.466 [2024-04-26 15:10:25.880604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.880772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.880800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.880973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.881107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.881131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.881293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.881486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.881535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.881654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.881795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.881823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.881983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.882165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.882190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.882326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.882470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.882497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.882640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.882779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.882807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 
00:29:40.466 [2024-04-26 15:10:25.882984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.883133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.883158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.883290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.883461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.883488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.883633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.883803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.883831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.883976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.884144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.884183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.884358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.884536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.884588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.884691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.884862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.884889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.885036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.885192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.885215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 
00:29:40.466 [2024-04-26 15:10:25.885361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.885475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.885503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.885672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.885827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.885855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.886060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.886171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.886213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.886356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.886470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.886499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.466 qpair failed and we were unable to recover it. 00:29:40.466 [2024-04-26 15:10:25.886668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.886845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.466 [2024-04-26 15:10:25.886873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 00:29:40.467 [2024-04-26 15:10:25.887015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.887166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.887189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 00:29:40.467 [2024-04-26 15:10:25.887332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.887502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.887530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 
00:29:40.467 [2024-04-26 15:10:25.887646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.887785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.887813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 00:29:40.467 [2024-04-26 15:10:25.887990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.888145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.888169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 00:29:40.467 [2024-04-26 15:10:25.888340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.888480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.888508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 00:29:40.467 [2024-04-26 15:10:25.888678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.888816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.888844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 00:29:40.467 [2024-04-26 15:10:25.888985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.889187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.889210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 00:29:40.467 [2024-04-26 15:10:25.889387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.889576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.889624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 00:29:40.467 [2024-04-26 15:10:25.889797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.889943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.467 [2024-04-26 15:10:25.889971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.467 qpair failed and we were unable to recover it. 
00:29:40.467 [2024-04-26 15:10:25.890160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.467 [2024-04-26 15:10:25.890289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.467 [2024-04-26 15:10:25.890312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420
00:29:40.467 qpair failed and we were unable to recover it.
00:29:40.467-00:29:40.473 [2024-04-26 15:10:25.890484 through 15:10:25.939858] (the same four-line failure sequence, two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", repeats verbatim throughout this interval; duplicate entries elided)
00:29:40.473 [2024-04-26 15:10:25.940008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.940130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.940159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.473 qpair failed and we were unable to recover it. 00:29:40.473 [2024-04-26 15:10:25.940295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.940463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.940491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.473 qpair failed and we were unable to recover it. 00:29:40.473 [2024-04-26 15:10:25.940620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.940748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.940769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.473 qpair failed and we were unable to recover it. 00:29:40.473 [2024-04-26 15:10:25.940918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.941055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.941083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.473 qpair failed and we were unable to recover it. 00:29:40.473 [2024-04-26 15:10:25.941254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.941365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.473 [2024-04-26 15:10:25.941394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.473 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.941541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.941676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.941699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.941814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.941956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.941983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 
00:29:40.474 [2024-04-26 15:10:25.942133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.942274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.942302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.942412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.942539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.942562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.942747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.942858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.942886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.943035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.943177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.943205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.943378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.943511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.943548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.943661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.943830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.943858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.943965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.944114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.944144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 
00:29:40.474 [2024-04-26 15:10:25.944287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.944435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.944458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.944645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.944773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.944805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.944923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.945061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.945090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.945236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.945381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.945403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.945588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.945738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.945766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.945874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.946015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.946057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.946208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.946342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.946365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 
00:29:40.474 [2024-04-26 15:10:25.946495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.946638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.946666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.946777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.946907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.946935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.947060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.947159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.947182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.947347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.947450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.947478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.474 qpair failed and we were unable to recover it. 00:29:40.474 [2024-04-26 15:10:25.947622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.947770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.474 [2024-04-26 15:10:25.947802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.947946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.948072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.948095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.948276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.948387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.948415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 
00:29:40.475 [2024-04-26 15:10:25.948564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.948704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.948732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.948900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.949030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.949055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.949214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.949347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.949375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.949476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.949581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.949609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.949766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.949926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.949966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.950067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.950214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.950237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.950387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.950531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.950559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 
00:29:40.475 [2024-04-26 15:10:25.950734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.950861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.950887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.951039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.951210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.951238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.951403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.951570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.951598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.951730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.951851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.951873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.952007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.952179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.952206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.952375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.952561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.952626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.952764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.952890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.952913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 
00:29:40.475 [2024-04-26 15:10:25.953055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.953201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.953224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.953407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.953551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.953579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.953728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.953857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.953879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.954011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.954128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.954156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.954328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.954495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.954522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.954646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.954741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.954764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 00:29:40.475 [2024-04-26 15:10:25.954949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.955089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.475 [2024-04-26 15:10:25.955119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.475 qpair failed and we were unable to recover it. 
00:29:40.475 [2024-04-26 15:10:25.955286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.955423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.955450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.955586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.955751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.955772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.955903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.956044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.956072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.956239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.956378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.956407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.956553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.956679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.956701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.956851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.957031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.957059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.957231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.957365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.957394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 
00:29:40.476 [2024-04-26 15:10:25.957567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.957691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.957713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.957865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.958007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.958058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.958204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.958373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.958400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.958522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.958650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.958671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.958823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.958988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.959016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.959179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.959290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.959317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.959465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.959598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.959620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 
00:29:40.476 [2024-04-26 15:10:25.959766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.959905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.959932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.960073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.960191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.960219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.960394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.960552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.960613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.960796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.960939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.960967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.961115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.961281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.961309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.961479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.961634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.961674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.961789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.961927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.961954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 
00:29:40.476 [2024-04-26 15:10:25.962123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.962256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.962280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.962434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.962596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.962619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.962778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.962917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.476 [2024-04-26 15:10:25.962944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.476 qpair failed and we were unable to recover it. 00:29:40.476 [2024-04-26 15:10:25.963089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.963233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.963261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.963429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.963553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.963576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.963733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.963901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.963929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.964074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.964243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.964271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 
00:29:40.477 [2024-04-26 15:10:25.964415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.964535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.964557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.964708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.964831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.964860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.965001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.965152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.965179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.965340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.965482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.965505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.965691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.965840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.965868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.966008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.966153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.966179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.966355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.966479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.966516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 
00:29:40.477 [2024-04-26 15:10:25.966659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.966765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.966792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.966963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.967130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.967159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.967339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.967472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.967509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.967679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.967848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.967875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.968043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.968191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.968219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.968378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.968509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.968531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.968715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.968867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.968894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 
00:29:40.477 [2024-04-26 15:10:25.969038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.969178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.969206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.969327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.969439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.969461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.969642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.969815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.969843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.970009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.970197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.970225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.970399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.970584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.970644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.970765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.970933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.970962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 00:29:40.477 [2024-04-26 15:10:25.971124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.971261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.971285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it. 
00:29:40.477 [2024-04-26 15:10:25.971469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.971600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.477 [2024-04-26 15:10:25.971638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.477 qpair failed and we were unable to recover it.
[... this error sequence (two posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420" record, and "qpair failed and we were unable to recover it.") repeats verbatim, differing only in timestamps, through 00:29:40.484 [2024-04-26 15:10:26.021618] ...]
00:29:40.484 [2024-04-26 15:10:26.021787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.021907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.021936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.484 qpair failed and we were unable to recover it. 00:29:40.484 [2024-04-26 15:10:26.022060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.022216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.022241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.484 qpair failed and we were unable to recover it. 00:29:40.484 [2024-04-26 15:10:26.022416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.022661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.022707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.484 qpair failed and we were unable to recover it. 00:29:40.484 [2024-04-26 15:10:26.022883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.023052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.023081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.484 qpair failed and we were unable to recover it. 00:29:40.484 [2024-04-26 15:10:26.023263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.023450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.023508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.484 qpair failed and we were unable to recover it. 00:29:40.484 [2024-04-26 15:10:26.023651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.023781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.023804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.484 qpair failed and we were unable to recover it. 00:29:40.484 [2024-04-26 15:10:26.023919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.024101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.024129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.484 qpair failed and we were unable to recover it. 
00:29:40.484 [2024-04-26 15:10:26.024269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.024408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.024436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.484 qpair failed and we were unable to recover it. 00:29:40.484 [2024-04-26 15:10:26.024577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.024685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.484 [2024-04-26 15:10:26.024713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.484 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.024860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.024987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.025041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.025166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.025302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.025330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.025477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.025648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.025676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.025815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.025922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.025950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.026095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.026237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.026262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 
00:29:40.485 [2024-04-26 15:10:26.026405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.026578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.026631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.026784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.026925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.026953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.027097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.027231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.027259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.027446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.027540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.027562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.027684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.027829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.027857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.027971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.028147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.028175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.028316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.028467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.028495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 
00:29:40.485 [2024-04-26 15:10:26.028671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.028845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.028873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.028978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.029136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.029165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.029309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.029445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.029473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.029647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.029786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.029814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.029970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.030121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.030161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.030296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.030477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.030530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.030676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.030841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.030869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 
00:29:40.485 [2024-04-26 15:10:26.031039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.031211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.031238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.031390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.031486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.031508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.031682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.031818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.031846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.032013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.032156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.032184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.032330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.032497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.032524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.032650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.032782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.032804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.032921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.033069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.033098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 
00:29:40.485 [2024-04-26 15:10:26.033267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.033380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.485 [2024-04-26 15:10:26.033408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.485 qpair failed and we were unable to recover it. 00:29:40.485 [2024-04-26 15:10:26.033546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.033711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.033738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.033914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.034083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.034122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.034298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.034432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.034488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.034629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.034772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.034799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.034914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.035046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.035075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.035246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.035374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.035397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 
00:29:40.486 [2024-04-26 15:10:26.035552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.035688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.035715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.035882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.036025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.036054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.036193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.036360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.036388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.036520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.036680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.036702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.036825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.036986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.037014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.037168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.037303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.037331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.037449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.037589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.037617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 
00:29:40.486 [2024-04-26 15:10:26.037754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.037909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.037932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.038085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.038255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.038283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.038455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.038559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.038587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.038693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.038837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.038865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.039005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.039133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.039156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.039297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.039444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.039472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.039642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.039811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.039839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 
00:29:40.486 [2024-04-26 15:10:26.039980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.040116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.040146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.040270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.040442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.040464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.040639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.040806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.040834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.040999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.041131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.041159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.041304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.041472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.041500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.486 qpair failed and we were unable to recover it. 00:29:40.486 [2024-04-26 15:10:26.041669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.041794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.486 [2024-04-26 15:10:26.041816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.042006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.042164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.042192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 
00:29:40.487 [2024-04-26 15:10:26.042357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.042454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.042482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.042621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.042760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.042788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.042961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.043097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.043140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.043272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.043400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.043449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.043591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.043729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.043757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.043860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.043974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.044002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.044174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.044336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.044359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 
00:29:40.487 [2024-04-26 15:10:26.044509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.044644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.044672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.044840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.045008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.045051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.045230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.045415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.045475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.045653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.045752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.045774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.045905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.046017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.046051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.046219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.046394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.046455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.046604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.046742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.046770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 
00:29:40.487 [2024-04-26 15:10:26.046903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.047062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.047086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.047240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.047379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.047407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.047548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.047690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.047718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.047887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.048026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.048055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.048227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.048371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.048409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.048555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.048716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.048744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.048890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.049000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.049034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 
00:29:40.487 [2024-04-26 15:10:26.049201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.049316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.049344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.049470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.049629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.049655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.049785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.049903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.049930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.050078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.050244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.050272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.487 [2024-04-26 15:10:26.050411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.050551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.487 [2024-04-26 15:10:26.050579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.487 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.050721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.050848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.050871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.051054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.051239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.051300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 
00:29:40.488 [2024-04-26 15:10:26.051439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.051605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.051633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.051775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.051920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.051948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.052094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.052243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.052266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.052439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.052594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.052653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.052791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.052963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.052995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.053151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.053341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.053406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.053580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.053751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.053779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 
00:29:40.488 [2024-04-26 15:10:26.053918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.054033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.054061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.054200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.054338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.054366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.054508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.054621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.054649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.054827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.054960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.054982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.055154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.055293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.055321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.055457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.055622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.055650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.055792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.055932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.055960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 
00:29:40.488 [2024-04-26 15:10:26.056139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.056300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.056350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.056510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.056695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.056741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.056892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.057061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.057089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.057257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.057399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.057427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.057546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.057674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.057697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.057870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.058053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.058100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.058267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.058403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.058431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 
00:29:40.488 [2024-04-26 15:10:26.058569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.058706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.058734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.058886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.058994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.059016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.059172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.059305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.059333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.059469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.059636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.059664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.059813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.059956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.059984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.488 qpair failed and we were unable to recover it. 00:29:40.488 [2024-04-26 15:10:26.060162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.060330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.488 [2024-04-26 15:10:26.060353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.060540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.060673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.060700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 
00:29:40.489 [2024-04-26 15:10:26.060843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.060948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.060976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.061109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.061273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.061301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.061423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.061600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.061623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.061779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.061930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.061958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.062113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.062215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.062238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.062351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.062511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.062539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.062658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.062786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.062811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 
00:29:40.489 [2024-04-26 15:10:26.062974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.063124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.063152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.063297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.063461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.063489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.063658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.063767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.063795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.063961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.064106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.064130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.064283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.064434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.064486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.064597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.064742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.064770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.064939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.065079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.065107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 
00:29:40.489 [2024-04-26 15:10:26.065292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.065475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.065531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.065674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.065863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.065891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.066033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.066198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.066226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.066379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.066515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.066543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.066720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.066888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.066916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.067034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.067142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.067170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.067348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.067542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.067592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 
00:29:40.489 [2024-04-26 15:10:26.067765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.067941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.067970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.068142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.068272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.068295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.068469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.068639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.068687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.068825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.068965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.068993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.069158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.069323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.069408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.069553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.069686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.069709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.489 qpair failed and we were unable to recover it. 00:29:40.489 [2024-04-26 15:10:26.069854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.489 [2024-04-26 15:10:26.070025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.070054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 
00:29:40.490 [2024-04-26 15:10:26.070218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.070357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.070385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.070497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.070661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.070689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.070829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.070954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.070976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.071176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.071351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.071378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.071547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.071693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.071721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.071889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.072033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.072062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.072236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.072336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.072358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 
00:29:40.490 [2024-04-26 15:10:26.072497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.072681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.072732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.072903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.073038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.073067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.073242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.073385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.073412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.073561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.073717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.073753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.073864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.074008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.074046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.074216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.074334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.074363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.074530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.074667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.074695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 
00:29:40.490 [2024-04-26 15:10:26.074841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.075016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.075064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.075209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.075346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.075379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.075539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.075687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.075716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.075865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.075982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.076010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.076149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.076306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.076344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.076501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.076641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.076668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.076781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.076891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.076918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 
00:29:40.490 [2024-04-26 15:10:26.077086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.077230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.077257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.077428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.077562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.077601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.077768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.077936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.077964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.078087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.078193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.078216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.078372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.078510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.078538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.078684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.078827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.078850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 00:29:40.490 [2024-04-26 15:10:26.078993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.079113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.490 [2024-04-26 15:10:26.079141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.490 qpair failed and we were unable to recover it. 
00:29:40.490 [2024-04-26 15:10:26.079283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.079421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.079448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.079588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.079702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.079730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.079872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.080008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.080052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.080235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.080378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.080406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.080520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.080663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.080690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.080799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.080942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.080970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.081131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.081292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.081333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 
00:29:40.491 [2024-04-26 15:10:26.081473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.081612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.081640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.081753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.081922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.081950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.082066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.082188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.082216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.082396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.082575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.082607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.082740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.082911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.082939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.083079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.083243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.083271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.083407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.083577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.083605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 
00:29:40.491 [2024-04-26 15:10:26.083752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.083866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.083890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.084037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.084178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.084205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.084320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.084454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.084482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.084648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.084773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.084801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.084957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.085127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.085170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.085280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.085420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.085466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.085632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.085750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.085778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 
00:29:40.491 [2024-04-26 15:10:26.085883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.086056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.086085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.086199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.086300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.086339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.086502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.086641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.086669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.491 qpair failed and we were unable to recover it. 00:29:40.491 [2024-04-26 15:10:26.086772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.491 [2024-04-26 15:10:26.086874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.086902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.087014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.087189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.087217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.087382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.087529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.087568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.087702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.087822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.087850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 
00:29:40.492 [2024-04-26 15:10:26.087985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.088134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.088163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.088334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.088439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.088466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.088586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.088706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.088729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.088856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.088962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.088990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.089119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.089260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.089285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.089432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.089570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.089597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.089714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.089870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.089892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 
00:29:40.492 [2024-04-26 15:10:26.090030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.090191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.090217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.090379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.090522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.090550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.090660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.090769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.090797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.090917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.091077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.091104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.091245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.091360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.091389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.091531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.091672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.091700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.091827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.091970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.091997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 
00:29:40.492 [2024-04-26 15:10:26.092151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.092255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.092279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.092430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.092591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.092614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.092738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.092889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.092913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.093017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.093127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.093151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.093307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.093443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.093468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.093614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.093748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.093771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 00:29:40.492 [2024-04-26 15:10:26.093908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.094057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.094083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it. 
00:29:40.492 [2024-04-26 15:10:26.094193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.094326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.492 [2024-04-26 15:10:26.094367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb8c000b90 with addr=10.0.0.2, port=4420 00:29:40.492 qpair failed and we were unable to recover it.
[triplet above repeated verbatim for every further connect attempt on tqpair=0x7fdb8c000b90; only timestamps differ, 15:10:26.094193-15:10:26.104816]
00:29:40.494 [2024-04-26 15:10:26.104965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.494 [2024-04-26 15:10:26.105115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.494 [2024-04-26 15:10:26.105142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:40.494 qpair failed and we were unable to recover it.
[triplet above repeated verbatim for every further connect attempt on tqpair=0x7fdb84000b90; only timestamps differ, 15:10:26.104965-15:10:26.128086]
00:29:40.497 [2024-04-26 15:10:26.128263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.497 [2024-04-26 15:10:26.128432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.497 [2024-04-26 15:10:26.128457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:40.497 qpair failed and we were unable to recover it.
[triplet above repeated verbatim for every further connect attempt on tqpair=0x6a45b0; only timestamps differ, 15:10:26.128263-15:10:26.552136 (elapsed log time 00:29:40.497-00:29:41.087)]
00:29:41.087 [2024-04-26 15:10:26.552275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.552409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.552437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.087 qpair failed and we were unable to recover it. 00:29:41.087 [2024-04-26 15:10:26.552562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.552687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.552710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.087 qpair failed and we were unable to recover it. 00:29:41.087 [2024-04-26 15:10:26.552858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.552967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.552996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.087 qpair failed and we were unable to recover it. 00:29:41.087 [2024-04-26 15:10:26.553117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.553217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.553241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.087 qpair failed and we were unable to recover it. 00:29:41.087 [2024-04-26 15:10:26.553384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.553547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.087 [2024-04-26 15:10:26.553576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.553714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.553838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.553861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.554007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.554145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.554174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 
00:29:41.088 [2024-04-26 15:10:26.554303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.554465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.554493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.554597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.554755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.554783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.554917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.555024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.555063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.555181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.555348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.555377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.555544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.555679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.555707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.555840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.555968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.555996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.556161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.556317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.556357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 
00:29:41.088 [2024-04-26 15:10:26.556459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.556616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.556645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.556783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.556925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.556954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.557055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.557222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.557251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.557380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.557524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.557548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.557684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.557819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.557847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.557950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.558111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.558140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.558252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.558353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.558385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 
00:29:41.088 [2024-04-26 15:10:26.558555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.558706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.558748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.558912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.559056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.559086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.559195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.559307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.559335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.559507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.559663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.559691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.559808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.559908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.559931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.560105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.560240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.560269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.560367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.560500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.560528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 
00:29:41.088 [2024-04-26 15:10:26.560689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.560819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.560847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.560980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.561126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.561153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.561295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.561438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.561466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.561604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.561747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.561776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.561912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.562049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.562079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.562245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.562405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.562447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.562612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.562722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.562750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 
00:29:41.088 [2024-04-26 15:10:26.562881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.563038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.563067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.563212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.563349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.563401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.563534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.563687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.563709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.563879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.564016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.564050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.564187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.564348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.564377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.564508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.564641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.564670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.564788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.564914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.564937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 
00:29:41.088 [2024-04-26 15:10:26.565079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.565210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.565238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.565403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.565529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.565557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.565692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.565824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.565852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.565985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.566120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.566144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.566327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.566460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.566488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.566648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.566785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.566813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.566953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.567131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.567160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 
00:29:41.088 [2024-04-26 15:10:26.567291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.567396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.567419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.567534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.567694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.567723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.567860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.567996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.568029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.568208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.568341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.568369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.568510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.568661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.568683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.568853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.569011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.569052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.569222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.569379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.569447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 
00:29:41.088 [2024-04-26 15:10:26.569628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.569786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.569814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.569979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.570121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.570159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.570292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.570425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.570454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.088 qpair failed and we were unable to recover it. 00:29:41.088 [2024-04-26 15:10:26.570616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.570755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.088 [2024-04-26 15:10:26.570783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.570942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.571086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.571146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.571279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.571386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.571409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.571581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.571682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.571711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 
00:29:41.089 [2024-04-26 15:10:26.571815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.571956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.571984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.572155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.572286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.572315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.572480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.572634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.572671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.572831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.572968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.572997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.573162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.573300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.573324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.573526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.573691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.573741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.573879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.573997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.574049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 
00:29:41.089 [2024-04-26 15:10:26.574168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.574302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.574331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.574472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.574573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.574606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.574771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.574871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.574900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.575032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.575195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.575219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.575369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.575501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.575529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.575692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.575795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.575823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.575931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.576085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.576114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 
00:29:41.089 [2024-04-26 15:10:26.576253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.576379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.576402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.576544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.576678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.576706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.576832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.576937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.576965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.577078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.577214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.577242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.577374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.577493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.577515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.577693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.577831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.577859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.578024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.578160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.578188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 
00:29:41.089 [2024-04-26 15:10:26.578333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.578504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.578532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.578668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.578789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.578812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.578926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.579061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.579091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.579254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.579414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.579442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.579585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.579740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.579769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.579901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.580024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.580063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.580196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.580296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.580324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 
00:29:41.089 [2024-04-26 15:10:26.580463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.580596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.580624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.580773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.580872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.580900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.581032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.581178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.581202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.581323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.581482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.581510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.581640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.581774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.581802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.581914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.582052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.582081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.582234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.582366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.582389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 
00:29:41.089 [2024-04-26 15:10:26.582488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.582601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.582629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.582764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.582902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.582930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.583085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.583218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.583246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.583413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.583539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.583561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.583708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.583827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.583855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.583991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.584158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.584187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 00:29:41.089 [2024-04-26 15:10:26.584329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.584456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.089 [2024-04-26 15:10:26.584484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.089 qpair failed and we were unable to recover it. 
[... the same four-record failure sequence (two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent reconnect attempt, timestamps 15:10:26.584604 through 15:10:26.631795 ...]
00:29:41.092 [2024-04-26 15:10:26.631950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.632076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.632106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 00:29:41.092 [2024-04-26 15:10:26.632237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.632368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.632405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 00:29:41.092 [2024-04-26 15:10:26.632541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.632700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.632728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 00:29:41.092 [2024-04-26 15:10:26.632832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.632963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.632998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 00:29:41.092 [2024-04-26 15:10:26.633120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.633238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.633263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 00:29:41.092 [2024-04-26 15:10:26.633415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.633563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.633604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 00:29:41.092 [2024-04-26 15:10:26.633743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.633849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.633876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 
00:29:41.092 [2024-04-26 15:10:26.633977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.634142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.634170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 00:29:41.092 [2024-04-26 15:10:26.634333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.634460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.634512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 00:29:41.092 [2024-04-26 15:10:26.634654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.634777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.634800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.092 qpair failed and we were unable to recover it. 00:29:41.092 [2024-04-26 15:10:26.634915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.092 [2024-04-26 15:10:26.635062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.635091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.635202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.635333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.635361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.635520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.635657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.635685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.635847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.635944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.635967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 
00:29:41.093 [2024-04-26 15:10:26.636122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.636254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.636282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.636443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.636580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.636608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.636739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.636875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.636903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.637032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.637202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.637226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.637352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.637487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.637515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.637652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.637762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.637790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.637948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.638079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.638108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 
00:29:41.093 [2024-04-26 15:10:26.638278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.638397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.638419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.638600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.638733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.638761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.638896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.639029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.639058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.639224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.639359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.639387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.639522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.639647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.639670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.639810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.639979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.640007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.640149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.640307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.640335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 
00:29:41.093 [2024-04-26 15:10:26.640476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.640626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.640655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.640756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.640850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.640872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.641008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.641135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.641164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.641273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.641435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.641463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.641625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.641756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.641783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.641947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.642124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.642154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.642293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.642395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.642423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 
00:29:41.093 [2024-04-26 15:10:26.642583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.642744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.642772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.642904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.643043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.643073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.643214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.643341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.643363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.643479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.643604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.643632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.643774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.643910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.643938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.644088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.644280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.644309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.644469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.644563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.644585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 
00:29:41.093 [2024-04-26 15:10:26.644760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.644923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.644951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.645115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.645217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.645245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.645407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.645594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.645646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.645783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.645929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.645952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.646074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.646234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.646262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.646396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.646533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.646561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.646664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.646801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.646829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 
00:29:41.093 [2024-04-26 15:10:26.646963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.647055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.647079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.647196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.647339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.647367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.647481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.647613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.647641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.093 qpair failed and we were unable to recover it. 00:29:41.093 [2024-04-26 15:10:26.647802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.647932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.093 [2024-04-26 15:10:26.647961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.648094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.648207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.648231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.648405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.648566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.648599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.648735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.648881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.648909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 
00:29:41.094 [2024-04-26 15:10:26.649040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.649175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.649203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.649312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.649476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.649498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.649610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.649749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.649777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.649917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.650049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.650078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.650242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.650403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.650431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.650596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.650721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.650744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.650918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.651055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.651084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 
00:29:41.094 [2024-04-26 15:10:26.651196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.651328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.651356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.651500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.651690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.651718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.651885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.652041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.652065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.652214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.652388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.652416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.652551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.652658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.652686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.652848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.652985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.653012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.653141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.653231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.653254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 
00:29:41.094 [2024-04-26 15:10:26.653437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.653618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.653679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.653819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.653925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.653953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.654087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.654195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.654223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.654392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.654536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.654575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.654677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.654812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.654840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.654979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.655095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.655124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.655236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.655338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.655366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 
00:29:41.094 [2024-04-26 15:10:26.655483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.655621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.655643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.655760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.655918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.655946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.656084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.656245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.656274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.656435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.656560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.656610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.656747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.656916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.656938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.657053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.657186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.657215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.657377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.657546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.657602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 
00:29:41.094 [2024-04-26 15:10:26.657731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.657856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.657884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.658025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.658174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.658197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.658347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.658505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.658533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.658691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.658855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.658883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.659025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.659163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.659191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.659349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.659502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.659540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.659703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.659831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.659860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 
00:29:41.094 [2024-04-26 15:10:26.659993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.660137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.660166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.660326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.660476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.094 [2024-04-26 15:10:26.660535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.094 qpair failed and we were unable to recover it. 00:29:41.094 [2024-04-26 15:10:26.660705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.660804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.660827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.095 qpair failed and we were unable to recover it. 00:29:41.095 [2024-04-26 15:10:26.661007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.661155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.661184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.095 qpair failed and we were unable to recover it. 00:29:41.095 [2024-04-26 15:10:26.661306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.661473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.661501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.095 qpair failed and we were unable to recover it. 00:29:41.095 [2024-04-26 15:10:26.661639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.661775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.661803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.095 qpair failed and we were unable to recover it. 00:29:41.095 [2024-04-26 15:10:26.661939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.662080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.095 [2024-04-26 15:10:26.662104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.095 qpair failed and we were unable to recover it. 
00:29:41.095 [2024-04-26 15:10:26.662246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.095 [2024-04-26 15:10:26.662375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.095 [2024-04-26 15:10:26.662403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.095 qpair failed and we were unable to recover it.
[... the same four-line sequence repeats for roughly 150 further connection attempts, application timestamps 15:10:26.662565 through 15:10:26.710053, all against tqpair=0x6a45b0 at 10.0.0.2:4420 and all ending "qpair failed and we were unable to recover it." ...]
00:29:41.098 [2024-04-26 15:10:26.710184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.098 [2024-04-26 15:10:26.710284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.098 [2024-04-26 15:10:26.710312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.098 qpair failed and we were unable to recover it.
00:29:41.098 [2024-04-26 15:10:26.710418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.098 [2024-04-26 15:10:26.710527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.098 [2024-04-26 15:10:26.710555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.098 qpair failed and we were unable to recover it. 00:29:41.098 [2024-04-26 15:10:26.710659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.098 [2024-04-26 15:10:26.710764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.098 [2024-04-26 15:10:26.710786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.098 qpair failed and we were unable to recover it. 00:29:41.098 [2024-04-26 15:10:26.710957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.098 [2024-04-26 15:10:26.711115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.098 [2024-04-26 15:10:26.711155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.098 qpair failed and we were unable to recover it. 00:29:41.098 [2024-04-26 15:10:26.711321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.098 [2024-04-26 15:10:26.711453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.098 [2024-04-26 15:10:26.711481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.098 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.711621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.711730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.711758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.711917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.712023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.712061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.712198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.712347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.712376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 
00:29:41.099 [2024-04-26 15:10:26.712523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.712625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.712653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.712783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.712886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.712914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.713087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.713217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.713240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.713378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.713505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.713533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.713662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.713796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.713824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.713959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.714114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.714143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.714258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.714404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.714427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 
00:29:41.099 [2024-04-26 15:10:26.714574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.714736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.714765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.714899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.715010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.715045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.715179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.715276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.715305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.715479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.715602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.715624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.715759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.715923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.715951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.716092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.716225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.716253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.716397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.716581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.716635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 
00:29:41.099 [2024-04-26 15:10:26.716775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.716928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.716950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.717074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.717239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.717267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.717404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.717564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.717592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.717724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.717893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.717921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.718091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.718199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.718222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.718374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.718534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.718563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.718671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.718805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.718833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 
00:29:41.099 [2024-04-26 15:10:26.718966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.719074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.719103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.719240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.719341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.719378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.719479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.719622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.719650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.719784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.719897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.719925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.720086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.720245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.720273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.720443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.720548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.720570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.720706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.720866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.720894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 
00:29:41.099 [2024-04-26 15:10:26.720991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.721143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.721172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.721311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.721443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.721471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.721587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.721739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.721765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.721948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.722074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.722103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.722270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.722376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.722404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.722532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.722664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.722693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.722853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.722977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.723000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 
00:29:41.099 [2024-04-26 15:10:26.723123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.723287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.723315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.723488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.723620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.723648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.723808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.723944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.723972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.724147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.724277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.724300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.724444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.724545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.724573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.724708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.724867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.724895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.725003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.725146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.725176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 
00:29:41.099 [2024-04-26 15:10:26.725329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.725430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.725453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.725596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.725734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.725762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.725894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.726051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.726081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.726185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.726343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.726371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.726506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.726660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.726683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.726836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.727001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.727037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.727179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.727340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.727369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 
00:29:41.099 [2024-04-26 15:10:26.727541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.727742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.727794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.727932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.728043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.728067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.099 [2024-04-26 15:10:26.728191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.728339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.099 [2024-04-26 15:10:26.728367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.099 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.728527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.728629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.728656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.728769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.728903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.728931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.729081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.729236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.729259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.729395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.729555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.729584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 
00:29:41.100 [2024-04-26 15:10:26.729688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.729847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.729876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.730004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.730142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.730170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.730310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.730475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.730512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.730647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.730764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.730792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.730899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.731008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.731042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.731184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.731326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.731354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.731467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.731592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.731614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 
00:29:41.100 [2024-04-26 15:10:26.731755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.731886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.731914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.732025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.732129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.732157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.732290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.732399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.732427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.732556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.732653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.732676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.732854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.733014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.733057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.733196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.733296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.733324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.733464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.733599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.733627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 
00:29:41.100 [2024-04-26 15:10:26.733761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.733887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.733909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.734078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.734220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.734248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.734383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.734515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.734543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.734704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.734836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.734864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.734998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.735164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.735188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.735391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.735527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.735555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.735720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.735852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.735880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 
00:29:41.100 [2024-04-26 15:10:26.736015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.736160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.736188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.736325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.736466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.736489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.736718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.736897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.736925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.737151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.737296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.737324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.737458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.737600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.737659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.737797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.737958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.737981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.738206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.738358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.738386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 
00:29:41.100 [2024-04-26 15:10:26.738549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.738708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.738736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.738899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.739070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.739098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.739242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.739351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.739389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.739546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.739683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.739711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.739847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.739982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.740010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.740176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.740314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.740342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 00:29:41.100 [2024-04-26 15:10:26.740515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.740629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.100 [2024-04-26 15:10:26.740652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.100 qpair failed and we were unable to recover it. 
00:29:41.100 [2024-04-26 15:10:26.740794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.100 [2024-04-26 15:10:26.740952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.100 [2024-04-26 15:10:26.740980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.100 qpair failed and we were unable to recover it.
[the four-line failure pattern above repeats back-to-back for every further reconnect attempt against tqpair=0x6a45b0 (addr=10.0.0.2, port=4420), identical except for the advancing timestamps, from 15:10:26.740794 through 15:10:26.787739 while the runner elapsed marker advances 00:29:41.100 -> 00:29:41.103; every attempt ends with "qpair failed and we were unable to recover it."]
00:29:41.103 [2024-04-26 15:10:26.787578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.103 [2024-04-26 15:10:26.787711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.103 [2024-04-26 15:10:26.787739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.103 qpair failed and we were unable to recover it.
00:29:41.103 [2024-04-26 15:10:26.787846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.787964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.787992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.788140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.788240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.788265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.788374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.788489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.788513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.788697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.788827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.788853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.788980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.789103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.789129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.789222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.789323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.789348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.789453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.789612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.789640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 
00:29:41.103 [2024-04-26 15:10:26.789772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.789902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.789930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.790040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.790147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.790172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.790288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.790463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.790487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.790631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.790772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.790801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.790912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.791033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.791062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.791207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.791330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.791354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.791488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.791630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.791655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 
00:29:41.103 [2024-04-26 15:10:26.791772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.791904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.791932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.792085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.792180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.792205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.792303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.792456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.792484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.792607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.793343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.793387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.103 qpair failed and we were unable to recover it. 00:29:41.103 [2024-04-26 15:10:26.793524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.793643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.103 [2024-04-26 15:10:26.793671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.793769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.793905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.793934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.794048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.794173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.794203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 
00:29:41.104 [2024-04-26 15:10:26.794341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.794483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.794507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.794662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.794761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.794786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.794890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.795016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.795047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.795177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.795289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.795334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.795451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.795575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.795616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.795744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.795906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.795934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.796059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.796163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.796188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 
00:29:41.104 [2024-04-26 15:10:26.796285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.796455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.796483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.796643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.796750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.796778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.796905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.797063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.797093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.797195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.797298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.797339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.797473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.797579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.797607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.797748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.797877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.797905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.798047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.798166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.798192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 
00:29:41.104 [2024-04-26 15:10:26.798295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.798436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.798465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.798595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.798755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.798783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.798901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.799075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.799101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.799230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.799350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.799392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.799528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.799675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.799703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.799839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.799984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.800013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.800158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.800288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.800313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 
00:29:41.104 [2024-04-26 15:10:26.800462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.800595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.800624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.800759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.800860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.800888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.801028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.801160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.801186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.801291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.801413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.801437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.801555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.801668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.801696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.801806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.801913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.801941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.802076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.802180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.802206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 
00:29:41.104 [2024-04-26 15:10:26.802363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.802466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.802491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.802636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.802779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.802807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.802943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.803075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.803102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.803235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.803351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.803379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.803525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.803666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.803691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.803844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.803950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.803978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.804111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.804216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.804241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 
00:29:41.104 [2024-04-26 15:10:26.804361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.804525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.104 [2024-04-26 15:10:26.804553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.104 qpair failed and we were unable to recover it. 00:29:41.104 [2024-04-26 15:10:26.804658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.804805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.804833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.804943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.805087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.805113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.805269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.805399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.805424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.805575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.805695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.805723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.805872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.806851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.806895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.807081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.807217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.807244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 
00:29:41.407 [2024-04-26 15:10:26.807368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.807485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.807509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.807659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.807787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.807813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.807949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.808075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.808101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.808215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.808325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.808353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.808494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.808626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.808654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.808783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.808915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.808943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.809053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.809149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.809174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 
00:29:41.407 [2024-04-26 15:10:26.809316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.809449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.809478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.809620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.809784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.809816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.809924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.810087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.810114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.810245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.810364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.810403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.810554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.810662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.810690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.810825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.810932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.810960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 00:29:41.407 [2024-04-26 15:10:26.811098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.811207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.811233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.407 qpair failed and we were unable to recover it. 
00:29:41.407 [2024-04-26 15:10:26.811351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.407 [2024-04-26 15:10:26.811496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.811521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.811649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.811774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.811813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.811956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.812081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.812108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.812272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.812420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.812448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.812548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.812648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.812672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.812831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.812972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.813000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.813180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.813288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.813314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 
00:29:41.408 [2024-04-26 15:10:26.813465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.813610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.813638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.813797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.813932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.813960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.814089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.814225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.814250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.814356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.814469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.814497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.814635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.814765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.814793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.814930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.815077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.815104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 00:29:41.408 [2024-04-26 15:10:26.815235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.815377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.408 [2024-04-26 15:10:26.815406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.408 qpair failed and we were unable to recover it. 
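errno 111 on Linux is ECONNREFUSED: each connection attempt to 10.0.0.2:4420 is answered with a reset because nothing is listening there, which is the expected state while target_disconnect.sh has the NVMe-oF target down. For reference, a minimal standalone C sketch of the same failure mode (illustrative only; the loopback address, and the assumption that nothing listens on the port, are placeholders rather than the test's configuration):

    /* econnrefused_demo.c - minimal sketch, not part of the test suite:
     * shows how connect() to a port with no listener yields errno 111
     * (ECONNREFUSED), the same failure posix_sock_create() logs above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                     /* NVMe/TCP default port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assume no listener here */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            /* With no listener bound to the port, Linux reports
             * errno 111: "Connection refused" (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Built with cc econnrefused_demo.c, this should print "connect() failed, errno = 111 (Connection refused)" whenever no listener is bound to the chosen port.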
00:29:41.408 [2024-04-26 15:10:26.815533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.815636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.815663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.408 qpair failed and we were unable to recover it.
00:29:41.408 [2024-04-26 15:10:26.815813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.815946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.815974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.408 qpair failed and we were unable to recover it.
00:29:41.408 [2024-04-26 15:10:26.816130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.816268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.816294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.408 qpair failed and we were unable to recover it.
00:29:41.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3906966 Killed "${NVMF_APP[@]}" "$@"
00:29:41.408 [2024-04-26 15:10:26.816430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.816538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.816566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.408 qpair failed and we were unable to recover it.
00:29:41.408 15:10:26 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:29:41.408 [2024-04-26 15:10:26.816681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.816819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 15:10:26 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:41.408 [2024-04-26 15:10:26.816848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.408 qpair failed and we were unable to recover it.
00:29:41.408 15:10:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:29:41.408 [2024-04-26 15:10:26.816956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.817065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 15:10:26 -- common/autotest_common.sh@710 -- # xtrace_disable
00:29:41.408 [2024-04-26 15:10:26.817107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.408 qpair failed and we were unable to recover it.
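The "Killed" line above is the point where target_disconnect.sh deliberately terminates the running target ("${NVMF_APP[@]}"); the traced disconnect_init 10.0.0.2 and nvmfappstart -m 0xF0 calls then restart it. Until the new target is listening again, every host-side reconnect attempt fails and emits one connect()/sock-connection-error/qpair-failed group. The loop below is a rough, self-contained illustration of that retry pattern, not SPDK's actual reconnect path; connect_once is a hypothetical helper, and the address, port, attempt budget, and back-off are assumed values:

    /* retry_connect.c - illustrative sketch of a retry loop of the kind
     * that produces the repeated failure lines above while the target is
     * down. NOT the SPDK host code; nvme_tcp has its own reconnect logic. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Hypothetical helper: one connect() attempt; returns fd or -1 with errno set. */
    static int connect_once(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            int saved = errno;          /* preserve errno across close() */
            close(fd);
            errno = saved;
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        /* Retry while the listener is down; each failed pass corresponds to
         * one "connect() failed, errno = 111" line in a log like the above. */
        for (int attempt = 1; attempt <= 100; attempt++) {
            int fd = connect_once("127.0.0.1", 4420);
            if (fd >= 0) {
                printf("connected on attempt %d\n", attempt);
                close(fd);
                return 0;
            }
            fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                    attempt, errno, strerror(errno));
            usleep(100 * 1000); /* back off 100 ms between attempts */
        }
        fprintf(stderr, "gave up: listener never came back\n");
        return 1;
    }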
00:29:41.408 [2024-04-26 15:10:26.817224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 15:10:26 -- common/autotest_common.sh@10 -- # set +x
00:29:41.408 [2024-04-26 15:10:26.817338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.817363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.408 qpair failed and we were unable to recover it.
(the same two "connect() failed, errno = 111" lines, "sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." repeat from [2024-04-26 15:10:26.817479] through [2024-04-26 15:10:26.819947])
00:29:41.408 [2024-04-26 15:10:26.819993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b20a0 (9): Bad file descriptor
00:29:41.408 [2024-04-26 15:10:26.820209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.408 [2024-04-26 15:10:26.820335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-04-26 15:10:26.820363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420
00:29:41.409 qpair failed and we were unable to recover it.
00:29:41.409 [2024-04-26 15:10:26.820515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 15:10:26 -- nvmf/common.sh@470 -- # nvmfpid=3907469
00:29:41.409 [2024-04-26 15:10:26.820632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-04-26 15:10:26.820677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420
00:29:41.409 qpair failed and we were unable to recover it.
00:29:41.409 15:10:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:41.409 15:10:26 -- nvmf/common.sh@471 -- # waitforlisten 3907469
00:29:41.409 [2024-04-26 15:10:26.820819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 15:10:26 -- common/autotest_common.sh@817 -- # '[' -z 3907469 ']'
00:29:41.409 [2024-04-26 15:10:26.820957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-04-26 15:10:26.820984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420
00:29:41.409 qpair failed and we were unable to recover it.
00:29:41.409 15:10:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:41.409 [2024-04-26 15:10:26.821113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 15:10:26 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:41.409 [2024-04-26 15:10:26.821225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 15:10:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:41.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:41.409 [2024-04-26 15:10:26.821252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420
00:29:41.409 qpair failed and we were unable to recover it.
00:29:41.409 15:10:26 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:41.409 [2024-04-26 15:10:26.821422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 15:10:26 -- common/autotest_common.sh@10 -- # set +x
00:29:41.409 [2024-04-26 15:10:26.821565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.409 [2024-04-26 15:10:26.821596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420
00:29:41.409 qpair failed and we were unable to recover it.
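Interleaved with the reconnect errors, the trace records the restart bookkeeping: the new target runs as PID 3907469, and waitforlisten polls (rpc_addr=/var/tmp/spdk.sock and max_retries=100 are both visible in the trace) until the process accepts connections on its RPC socket. The sketch below illustrates that polling idea in C; it is not the autotest helper itself, and the 100 ms sleep between attempts is an assumed interval:

    /* waitforlisten_sketch.c - rough illustration (not the autotest helper)
     * of the wait loop traced above: poll until something is accepting
     * connections on the app's UNIX domain RPC socket. The socket path and
     * retry budget mirror the traced values. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        const char *rpc_addr = "/var/tmp/spdk.sock"; /* from the trace above */
        int max_retries = 100;                       /* from the trace above */

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) {
                perror("socket");
                return 1;
            }

            struct sockaddr_un addr = {0};
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, rpc_addr, sizeof(addr.sun_path) - 1);

            /* ENOENT: socket file not created yet; ECONNREFUSED: file exists
             * but nothing is accepting yet. Either way, wait and retry. */
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("process is up and listening on %s\n", rpc_addr);
                close(fd);
                return 0;
            }
            close(fd);
            usleep(100 * 1000);
        }
        fprintf(stderr, "timed out waiting for %s\n", rpc_addr);
        return 1;
    }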
00:29:41.409 [... the connect() failed, errno = 111 / sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence repeats with monotonically increasing timestamps from 15:10:26.820515 through 15:10:26.861766 ...]
00:29:41.414 [2024-04-26 15:10:26.861927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.862068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.862099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.862205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.862307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.862333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.862462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.862642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.862682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.862785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.862901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.862926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.863041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.863175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.863201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.863331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.863445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.863488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.863603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.863725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.863750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 
00:29:41.414 [2024-04-26 15:10:26.863862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.863966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.863990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.864118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.864262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.864289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.864477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.864621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.864644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.864781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.864911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.864942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.865098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.865225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.865254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.865412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.865545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.865567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 00:29:41.414 [2024-04-26 15:10:26.865702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.865804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.414 [2024-04-26 15:10:26.865843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.414 qpair failed and we were unable to recover it. 
00:29:41.414 [... the retry pattern repeats three more times against tqpair=0x7fdb84000b90, 15:10:26.865980 through 15:10:26.866659 ...]
00:29:41.414 [2024-04-26 15:10:26.866812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.414 [2024-04-26 15:10:26.866815] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization...
00:29:41.414 [2024-04-26 15:10:26.866888] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:41.414 [2024-04-26 15:10:26.866966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.414 [2024-04-26 15:10:26.866990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420
00:29:41.414 qpair failed and we were unable to recover it.
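The EAL line above is the nvmf target process coming up under DPDK. An annotated reading of its parameters, as a sketch: the values are taken from the log line itself, and the flag meanings follow standard DPDK EAL option semantics rather than anything SPDK-specific.

    # Assumed meanings of the DPDK EAL parameters logged above:
    #   nvmf                             process name handed to the EAL
    #   -c 0xF0                          hex core mask: bits 4-7 set, so run on cores 4-7
    #   --no-telemetry                   disable the DPDK telemetry socket
    #   --log-level=lib.eal:6            per-component log verbosity (also cryptodev, user1)
    #   --base-virtaddr=0x200000000000   fixed base virtual address for hugepage mappings
    #   --match-allocations              free hugepage memory exactly as it was allocated
    #   --file-prefix=spdk0              per-process namespace for hugepage/runtime files
    #   --proc-type=auto                 auto-detect primary vs. secondary process role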
00:29:41.414 [... from 15:10:26.867136 the failures resume, now reported mostly against a second qpair handle ...]
00:29:41.415 [2024-04-26 15:10:26.867829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.415 [2024-04-26 15:10:26.867930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.415 [2024-04-26 15:10:26.867955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.415 qpair failed and we were unable to recover it.
00:29:41.418 [... the pattern repeats without interruption from 15:10:26.868099 through 15:10:26.899558; one short run of retries (15:10:26.869165 through 15:10:26.870626) still reports the old handle tqpair=0x7fdb84000b90, all the rest report tqpair=0x7fdb7c000b90, and every failure is errno = 111 against addr=10.0.0.2, port=4420 ...]
00:29:41.418 [2024-04-26 15:10:26.899700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.899841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.899870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.418 qpair failed and we were unable to recover it. 00:29:41.418 [2024-04-26 15:10:26.900003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.900141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.900165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.418 qpair failed and we were unable to recover it. 00:29:41.418 [2024-04-26 15:10:26.900274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.900407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.900436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.418 qpair failed and we were unable to recover it. 00:29:41.418 [2024-04-26 15:10:26.900564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.900678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.900706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.418 qpair failed and we were unable to recover it. 00:29:41.418 [2024-04-26 15:10:26.900873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.901008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.418 [2024-04-26 15:10:26.901070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.418 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.901215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.901364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.901398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.901538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.901681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.901710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 
00:29:41.419 [2024-04-26 15:10:26.901874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.902008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.902046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.902162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.902283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.902326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.902426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.902589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.902617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.902726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.902847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.902870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.903016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.903177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.903216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.903346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.903507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.903535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.903663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.903792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.903814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 
00:29:41.419 [2024-04-26 15:10:26.903987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.904129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.904158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.904291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.904393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.904426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.904569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.904689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.904712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.904858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.905035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.905064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.905195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.905359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.905387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.905557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.905650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.905673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 00:29:41.419 [2024-04-26 15:10:26.905788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.905922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.419 [2024-04-26 15:10:26.905951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.419 qpair failed and we were unable to recover it. 
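For reference, errno 111 is ECONNREFUSED: each connect() to 10.0.0.2 port 4420 (the default NVMe/TCP port) is refused because nothing is accepting connections on the target side yet, so every retry on the qpair fails identically. A minimal standalone C sketch of that failure mode -- illustrative only, not SPDK's actual posix_sock_create(); the retry count and delay are assumptions:

/* Sketch: repeated TCP connect attempts that fail with ECONNREFUSED (111)
 * when no listener is bound to the address, mirroring the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(4420);               /* default NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {  /* illustrative retry cap */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        /* With no listener on the target this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        sleep(1);                                 /* illustrative backoff */
    }
    return 1;
}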
(the failure sequence continues through 15:10:26.906, interrupted once by this EAL notice:)
00:29:41.419 EAL: No free 2048 kB hugepages reported on node 1
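The EAL notice above means DPDK's environment abstraction layer found zero free 2048 kB hugepages on NUMA node 1. A small hedged diagnostic sketch -- not DPDK/EAL code -- that reads the same per-node counter via the standard Linux sysfs layout:

/* Sketch: report free 2048 kB hugepages on NUMA node 1; a value of 0
 * matches the EAL message in the log above. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/devices/system/node/node1/hugepages/"
                       "hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    long free_pages;

    if (!f) {
        perror(path);
        return 1;
    }
    if (fscanf(f, "%ld", &free_pages) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("free 2048 kB hugepages on node 1: %ld\n", free_pages);
    return 0;
}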
(the failure sequence resumes at 15:10:26.907 and continues uninterrupted through 15:10:26.909)
(the sequence continues from 15:10:26.910 through 15:10:26.911, interrupted once by this notice:)
00:29:41.420 [2024-04-26 15:10:26.911599] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
(the sequence resumes at 15:10:26.911737)
(the same connect() failed, errno = 111 / sock connection error of tqpair=0x7fdb7c000b90 / qpair failed and we were unable to recover it sequence then repeats without further interruption from 15:10:26.912 through 15:10:26.937)
00:29:41.423 [2024-04-26 15:10:26.937513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.937604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.937627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.937742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.937840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.937863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.938009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.938181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.938219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.938343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.938501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.938538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.938699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.938797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.938820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.938958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.939120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.939144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.939261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.939366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.939389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 
00:29:41.423 [2024-04-26 15:10:26.939546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.939647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.939670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.939810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.939925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.939949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.940071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.940169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.940193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.940347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.940495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.940519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.940635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.940729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.940753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.940923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.941059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.941084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.941211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.941364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.941402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 
00:29:41.423 [2024-04-26 15:10:26.941529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.941648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.941672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.941801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.423 [2024-04-26 15:10:26.941814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.941915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.941939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.942136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.942255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.942294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.942447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.942599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.423 [2024-04-26 15:10:26.942622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.423 qpair failed and we were unable to recover it. 00:29:41.423 [2024-04-26 15:10:26.942760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.942889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.942912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.943061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.943188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.943219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.943396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.943561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.943584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 
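The single *NOTICE* interleaved above comes from a different component: app.c: 828:spdk_app_start is an SPDK application launching on this host and reporting its core count, and its output is simply mixed into the same console stream as the initiator's connect errors. All SPDK log entries here share one anatomy, file.c:line:function: *LEVEL*: message, so *NOTICE* startup lines can be told apart from the *ERROR* retries surrounding them when reading the stream.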
00:29:41.424 [2024-04-26 15:10:26.943701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.943819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.943843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.943983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.944133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.944158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.944305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.944440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.944464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.944646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.944751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.944774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.944949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.945076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.945101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.945316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.945459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.945482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.945710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.945830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.945852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 
00:29:41.424 [2024-04-26 15:10:26.946003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.946152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.946176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.946303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.946418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.946441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.946577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.946736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.946759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.946905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.947063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.947088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.947227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.947383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.947406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.947578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.947677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.947701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.947839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.948046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.948085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 
00:29:41.424 [2024-04-26 15:10:26.948255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.948353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.948377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.948545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.948643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.948666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.948813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.948960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.948984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.949223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.949411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.949448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.949605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.949736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.949760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.949895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.950106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.950131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.950246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.950422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.950445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 
00:29:41.424 [2024-04-26 15:10:26.950575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.950707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.950730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.950852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.951008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.951051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.951195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.951345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.951383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.951561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.951746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.951770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.424 qpair failed and we were unable to recover it. 00:29:41.424 [2024-04-26 15:10:26.951879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.424 [2024-04-26 15:10:26.951978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.952017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.952195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.952355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.952377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.952543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.952640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.952663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 
00:29:41.425 [2024-04-26 15:10:26.952808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.952938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.952961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.953076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.953233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.953257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.953423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.953548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.953586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.953738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.953891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.953914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.954003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.954135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.954159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.954289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.954397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.954421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.954621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.954771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.954794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 
00:29:41.425 [2024-04-26 15:10:26.954974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.955165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.955205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.955416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.955605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.955628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.955793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.955915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.955938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.956082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.956212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.956237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.956374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.956518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.956556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.956692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.956831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.956855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.956988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.957159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.957185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 
00:29:41.425 [2024-04-26 15:10:26.957302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.957402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.957427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.957606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.957730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.957754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.957891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.958032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.958058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.958212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.958356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.958393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.958499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.958633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.958657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.958835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.958933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.958956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.959053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.959150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.959174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 
00:29:41.425 [2024-04-26 15:10:26.959344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.959525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.959548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.425 [2024-04-26 15:10:26.959720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.959844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.425 [2024-04-26 15:10:26.959867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.425 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.960027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.960160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.960183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.960338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.960453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.960477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.960624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.960785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.960809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.961054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.961184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.961209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.961340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.961452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.961476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 
00:29:41.426 [2024-04-26 15:10:26.961661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.961774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.961798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.961941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.962083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.962108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.962358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.962534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.962559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.962702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.962889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.962913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.963049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.963284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.963309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.963473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.963626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.963650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.963822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.963974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.964012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 
00:29:41.426 [2024-04-26 15:10:26.964167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.964341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.964364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.964511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.964645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.964669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.964792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.964918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.964941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.965082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.965210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.965236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.965372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.965502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.965526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.965648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.965780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.965803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.965965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.966128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.966152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 
00:29:41.426 [2024-04-26 15:10:26.966291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.966461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.966485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.966717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.966894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.966918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.967079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.967230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.967254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.967399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.967524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.967548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.967650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.967782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.967813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.967938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.968075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.968100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 00:29:41.426 [2024-04-26 15:10:26.968343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.968472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.426 [2024-04-26 15:10:26.968496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.426 qpair failed and we were unable to recover it. 
00:29:41.426 [2024-04-26 15:10:26.968609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.426 [2024-04-26 15:10:26.968761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.426 [2024-04-26 15:10:26.968785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.426 qpair failed and we were unable to recover it.
[... the same four-message sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats continuously from 15:10:26.968 through 15:10:27.016 ...]
00:29:41.431 [2024-04-26 15:10:27.016134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.431 [2024-04-26 15:10:27.016244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.431 [2024-04-26 15:10:27.016270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.431 qpair failed and we were unable to recover it.
00:29:41.432 [2024-04-26 15:10:27.016423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.016578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.016616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.016757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.016886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.016910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.017060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.017192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.017218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.017330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.017431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.017454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.017629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.017768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.017806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.017944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.018075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.018101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.018205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.018317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.018341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 
00:29:41.432 [2024-04-26 15:10:27.018458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.018587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.018612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.018784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.018911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.018936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.019092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.019228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.019255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.019418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.019543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.019567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.019680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.019801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.019825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.019998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.020162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.020188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.020317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.020446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.020470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 
00:29:41.432 [2024-04-26 15:10:27.020652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.020777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.020816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.020938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.021077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.021103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.021214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.021354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.021378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.021525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.021683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.021707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.021850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.021957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.021981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.022165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.022328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.022355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.022505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.022644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.022669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 
00:29:41.432 [2024-04-26 15:10:27.022827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.022934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.022959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.023081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.023214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.023239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.023389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.023517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.023541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.023688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.023789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.023814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.023956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.024097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.024124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.024286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.024407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.024431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.024560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.024712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.024736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 
00:29:41.432 [2024-04-26 15:10:27.024882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.024976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.025015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.025134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.025275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.025301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.025449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.025558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.025582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.025723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.025851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.025875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.432 qpair failed and we were unable to recover it. 00:29:41.432 [2024-04-26 15:10:27.026048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.432 [2024-04-26 15:10:27.026175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.026202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.026348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.026457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.026482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.026662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.026816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.026841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 
00:29:41.433 [2024-04-26 15:10:27.027028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.027145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.027172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.027321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.027447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.027471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.027591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.027715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.027739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.027885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.028033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.028060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.028221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.028336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.028360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.028514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.028613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.028637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.028803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.028905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.028929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 
00:29:41.433 [2024-04-26 15:10:27.029088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.029251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.029277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.029419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.029544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.029567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.029711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.029830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.029854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.029993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.030173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.030201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.030338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.030428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.030452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.030622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.030727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.030751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.030848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.030962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.030985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 
00:29:41.433 [2024-04-26 15:10:27.031173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.031300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.031343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.031474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.031629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.031653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.031824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.031945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.031969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.032155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.032283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.032324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.032456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.032577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.032600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.032773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.032891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.032914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.033046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.033196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.033222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 
00:29:41.433 [2024-04-26 15:10:27.033350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.033460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.033484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.033657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.033758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.033782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.033880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.034025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.034051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.034164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.034298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.034340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.034516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.034616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.034639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.034822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.034922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.034945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 00:29:41.433 [2024-04-26 15:10:27.035086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.035222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.035248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.433 qpair failed and we were unable to recover it. 
00:29:41.433 [2024-04-26 15:10:27.035396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.035523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.433 [2024-04-26 15:10:27.035547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.434 qpair failed and we were unable to recover it. 00:29:41.434 [2024-04-26 15:10:27.035685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.035784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.035808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.434 qpair failed and we were unable to recover it. 00:29:41.434 [2024-04-26 15:10:27.035959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.036095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.036121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.434 qpair failed and we were unable to recover it. 00:29:41.434 [2024-04-26 15:10:27.036259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.036386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.036411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.434 qpair failed and we were unable to recover it. 00:29:41.434 [2024-04-26 15:10:27.036555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.036650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.036675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.434 qpair failed and we were unable to recover it. 00:29:41.434 [2024-04-26 15:10:27.036789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.036926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.036952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.434 qpair failed and we were unable to recover it. 00:29:41.434 [2024-04-26 15:10:27.037084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.037191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.434 [2024-04-26 15:10:27.037222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.434 qpair failed and we were unable to recover it. 00:29:41.434 [2024-04-26 15:10:27.037295] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.434 [2024-04-26 15:10:27.037332] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:41.434 [2024-04-26 15:10:27.037348] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:41.434 [2024-04-26 15:10:27.037361] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:41.434 [2024-04-26 15:10:27.037376] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:41.434 [2024-04-26 15:10:27.037458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:29:41.434 [2024-04-26 15:10:27.037513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:29:41.434 [2024-04-26 15:10:27.037561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:29:41.434 [2024-04-26 15:10:27.037564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:29:41.434 [... interleaved with these notices (15:10:27.037356 through 15:10:27.038644), the connect() failed (errno = 111) / sock connection error sequence for tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 continues, each attempt ending "qpair failed and we were unable to recover it." ...]
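The app_setup_trace notices above are SPDK's own pointer for debugging this run: while the nvmf target is still up, a tracepoint snapshot can be taken with the exact command the log prints, and the shared-memory file it names can be copied out for offline decoding. A minimal sketch, using only the command and path quoted in the notices (the destination path is illustrative):

    spdk_trace -s nvmf -i 0          # snapshot nvmf tracepoint events from app instance 0
    cp /dev/shm/nvmf_trace.0 /tmp/   # keep the trace file for offline analysis/debug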
00:29:41.435 [... the same retry loop runs on through 15:10:27.052306: pairs of posix_sock_create connect() failures (errno = 111) followed by nvme_tcp_qpair_connect_sock errors for tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420, every attempt ending "qpair failed and we were unable to recover it." ...]
00:29:41.435 [2024-04-26 15:10:27.052406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.435 [2024-04-26 15:10:27.052542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.435 [2024-04-26 15:10:27.052567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.435 qpair failed and we were unable to recover it. 00:29:41.435 [2024-04-26 15:10:27.052673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.435 [2024-04-26 15:10:27.052775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.435 [2024-04-26 15:10:27.052800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.435 qpair failed and we were unable to recover it. 00:29:41.435 [2024-04-26 15:10:27.052907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.053044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.053070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.053175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.053300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.053330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.053484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.053608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.053633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.053759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.053863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.053888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.054033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.054141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.054166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 
00:29:41.436 [2024-04-26 15:10:27.054296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.054400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.054425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.054561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.054697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.054722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.054852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.054957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.054984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.055118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.055245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.055271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.055384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.055515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.055542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.055685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.055845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.055872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.055973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.056135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.056165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 
00:29:41.436 [2024-04-26 15:10:27.056265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.056403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.056429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.056533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.056661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.056687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.056815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.056916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.056941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.057076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.057210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.057235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.057365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.057469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.057494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.057652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.057784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.057811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.057916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.058045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.058071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 
00:29:41.436 [2024-04-26 15:10:27.058210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.058374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.058400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.058524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.058625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.058652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.058778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.058916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.058951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.059086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.059217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.059243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.059374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.059505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.059530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.059662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.059791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.059818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.059947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.060076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.060103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 
00:29:41.436 [2024-04-26 15:10:27.060259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.060360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.060385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.060538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.060677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.060703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.060833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.060958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.060984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.061138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.061247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.061272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.061402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.061502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.061527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.061653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.061764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.061790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 00:29:41.436 [2024-04-26 15:10:27.061898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.062069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.062096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.436 qpair failed and we were unable to recover it. 
00:29:41.436 [2024-04-26 15:10:27.062258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.062403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.436 [2024-04-26 15:10:27.062428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.062558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.062657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.062684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.062793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.062918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.062943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.063077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.063184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.063211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.063371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.063505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.063532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.063637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.063738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.063764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.063871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.063987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.064014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 
00:29:41.437 [2024-04-26 15:10:27.064201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.064359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.064385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.064492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.064612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.064637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.064773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.064875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.064901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.065054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.065161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.065186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.065285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.065422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.065447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.065578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.065709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.065735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.065896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.066012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.066045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 
00:29:41.437 [2024-04-26 15:10:27.066149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.066280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.066307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.066411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.066521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.066547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.066709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.066835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.066862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.066965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.067091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.067119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.067265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.067423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.067450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.067592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.067692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.067718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.067880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.067994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.068026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 
00:29:41.437 [2024-04-26 15:10:27.068130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.068285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.068310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.068457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.068588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.068613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.068734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.068861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.068887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.068985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.069125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.069151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.069263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.069364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.069389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.069522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.069650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.069675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.069784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.069912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.069938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 
00:29:41.437 [2024-04-26 15:10:27.070070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.070203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.070228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.437 qpair failed and we were unable to recover it. 00:29:41.437 [2024-04-26 15:10:27.070365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.437 [2024-04-26 15:10:27.070492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.070518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.070651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.070743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.070769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.070899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.071061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.071089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.071214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.071354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.071380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.071477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.071620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.071646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.071789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.071916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.071942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 
00:29:41.438 [2024-04-26 15:10:27.072061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.072193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.072218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.072328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.072464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.072489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.072626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.072760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.072786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.072917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.073017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.073049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.073183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.073286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.073313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.073475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.073633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.073659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.073767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.073875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.073901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 
00:29:41.438 [2024-04-26 15:10:27.074015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.074152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.074177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.074288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.074446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.074471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.074625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.074757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.074783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.074886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.075007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.075051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.075169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.075290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.075328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.075470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.075644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.075685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.075825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.075954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.075979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 
00:29:41.438 [2024-04-26 15:10:27.076185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.076346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.076371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.076550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.076699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.076726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.076881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.076982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.077007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.077171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.077270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.077297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.077446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.077644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.077670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.438 qpair failed and we were unable to recover it. 00:29:41.438 [2024-04-26 15:10:27.077853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.438 [2024-04-26 15:10:27.078007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.078052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.078159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.078337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.078362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 
00:29:41.439 [2024-04-26 15:10:27.078510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.078636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.078662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.078780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.078882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.078908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.079057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.079186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.079212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.079338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.079517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.079543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.079647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.079771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.079797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.079969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.080174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.080200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.080344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.080501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.080527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 
00:29:41.439 [2024-04-26 15:10:27.080635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.080777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.080803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.080934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.081170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.081197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.081305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.081462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.081487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.081643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.081782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.081808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.081983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.082200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.082228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.082386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.082518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.082543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 00:29:41.439 [2024-04-26 15:10:27.082714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.082833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.439 [2024-04-26 15:10:27.082860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420 00:29:41.439 qpair failed and we were unable to recover it. 
00:29:41.440 [2024-04-26 15:10:27.090188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.090291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.090316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420
00:29:41.440 qpair failed and we were unable to recover it.
00:29:41.440 [2024-04-26 15:10:27.090486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.090617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.090643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420
00:29:41.440 qpair failed and we were unable to recover it.
00:29:41.440 [2024-04-26 15:10:27.090769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.090969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.090995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb84000b90 with addr=10.0.0.2, port=4420
00:29:41.440 qpair failed and we were unable to recover it.
00:29:41.440 [2024-04-26 15:10:27.091135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.091287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.091316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.440 qpair failed and we were unable to recover it.
00:29:41.440 [2024-04-26 15:10:27.091487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.091589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.091616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.440 qpair failed and we were unable to recover it.
00:29:41.440 [2024-04-26 15:10:27.091774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.091960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.091989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.440 qpair failed and we were unable to recover it.
00:29:41.440 [2024-04-26 15:10:27.092115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.092303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.440 [2024-04-26 15:10:27.092329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.440 qpair failed and we were unable to recover it.
00:29:41.706 [2024-04-26 15:10:27.121599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.706 [2024-04-26 15:10:27.121738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.706 [2024-04-26 15:10:27.121781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.706 qpair failed and we were unable to recover it.
00:29:41.706 [2024-04-26 15:10:27.122030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.706 [2024-04-26 15:10:27.122185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.706 [2024-04-26 15:10:27.122211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.706 qpair failed and we were unable to recover it.
00:29:41.706 [2024-04-26 15:10:27.122419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.706 [2024-04-26 15:10:27.122592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.706 [2024-04-26 15:10:27.122619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.706 qpair failed and we were unable to recover it.
00:29:41.707 [2024-04-26 15:10:27.122762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.707 [2024-04-26 15:10:27.122972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.707 [2024-04-26 15:10:27.122997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb7c000b90 with addr=10.0.0.2, port=4420
00:29:41.707 qpair failed and we were unable to recover it.
00:29:41.707 [2024-04-26 15:10:27.123245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.707 [2024-04-26 15:10:27.123481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.707 [2024-04-26 15:10:27.123510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.707 qpair failed and we were unable to recover it.
00:29:41.707 [2024-04-26 15:10:27.123715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.707 [2024-04-26 15:10:27.123930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.707 [2024-04-26 15:10:27.123956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.707 qpair failed and we were unable to recover it.
00:29:41.707 [2024-04-26 15:10:27.124148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.707 [2024-04-26 15:10:27.124285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.707 [2024-04-26 15:10:27.124327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.707 qpair failed and we were unable to recover it.
00:29:41.708 [2024-04-26 15:10:27.135745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.135909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.135934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.136125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.136302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.136328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.136511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.136737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.136761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.136952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.137176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.137207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.137437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.137608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.137633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.137831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.137966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.137992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.138195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.138380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.138419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 
00:29:41.708 [2024-04-26 15:10:27.138649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.138899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.138925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.139158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.139359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.139384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.139650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.139849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.139874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.140105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.140315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.140340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.140506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.140729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.140754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.140963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.141151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.141178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.141405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.141602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.141627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 
00:29:41.708 [2024-04-26 15:10:27.141744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.141878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.141903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.142010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.142217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.142243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.142413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.142594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.142619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.142883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.143088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.143115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.143297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.143431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.143473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.143710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.143866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.143891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.144093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.144249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.144275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 
00:29:41.708 [2024-04-26 15:10:27.144453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.144670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.144695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.144888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.145014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.145048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.145218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.145409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.145435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.145637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.145860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.145885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.146115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.146364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.146390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.708 qpair failed and we were unable to recover it. 00:29:41.708 [2024-04-26 15:10:27.146581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.708 [2024-04-26 15:10:27.146818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.146844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.147077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.147259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.147286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 
00:29:41.709 [2024-04-26 15:10:27.147472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.147633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.147658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.147851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.148090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.148117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.148340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.148534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.148560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.148805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.148975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.149001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.149246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.149426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.149451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.149606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.149785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.149826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.150033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.150266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.150293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 
00:29:41.709 [2024-04-26 15:10:27.150444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.150610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.150636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.150817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.151055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.151084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.151312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.151502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.151529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.151711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.151910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.151936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.152125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.152302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.152329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.152555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.152781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.152808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.152991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.153194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.153221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 
00:29:41.709 [2024-04-26 15:10:27.153432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.153627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.153653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.153823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.153968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.153999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.154196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.154333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.154363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.154545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.154729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.154755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.154928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.155109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.155138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.155317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.155512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.155539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.155763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.155965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.155992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 
00:29:41.709 [2024-04-26 15:10:27.156154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.156336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.156363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.709 qpair failed and we were unable to recover it. 00:29:41.709 [2024-04-26 15:10:27.156555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.156775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.709 [2024-04-26 15:10:27.156802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.156997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.157179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.157205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.157382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.157598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.157624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.157848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.158087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.158114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.158325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.158561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.158587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.158802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.158929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.158956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 
00:29:41.710 [2024-04-26 15:10:27.159135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.159305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.159332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.159471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.159654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.159681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.159906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.160126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.160153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.160386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.160602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.160628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.160764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.160940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.160967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.161199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.161392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.161419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.161592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.161776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.161802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 
00:29:41.710 [2024-04-26 15:10:27.162053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.162235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.162261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.162507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.162727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.162753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.162941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.163092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.163120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.163314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.163541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.163567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.163716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.163910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.163937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.164169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.164393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.164419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.164646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.164794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.164819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 
00:29:41.710 [2024-04-26 15:10:27.165047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.165268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.165294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.165460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.165707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.165733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.165908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.166103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.166130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.166317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.166550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.166576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.166783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.167012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.167052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.167297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.167493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.167519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 [2024-04-26 15:10:27.167701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 15:10:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:41.710 [2024-04-26 15:10:27.167908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.167937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 
00:29:41.710 [2024-04-26 15:10:27.168134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 15:10:27 -- common/autotest_common.sh@850 -- # return 0 00:29:41.710 [2024-04-26 15:10:27.168329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.168356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 15:10:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:41.710 [2024-04-26 15:10:27.168617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 15:10:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:41.710 [2024-04-26 15:10:27.168810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.710 [2024-04-26 15:10:27.168837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.710 qpair failed and we were unable to recover it. 00:29:41.710 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:29:41.711 [2024-04-26 15:10:27.169056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.169266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.169292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.169539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.169774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.169800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.170029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.170234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.170261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.170511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.170683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.170708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.170927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.171120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.171148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 
00:29:41.711 [2024-04-26 15:10:27.171253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.171398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.171428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.171584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.171731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.171757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.171890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.172005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.172048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.172160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.172284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.172311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.172467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.172655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.172680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.172833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.173043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.173075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.173201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.173343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.173369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 
00:29:41.711 [2024-04-26 15:10:27.173492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.173715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.173741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.173994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.174153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.174180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.174295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.174429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.174455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.174628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.174811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.174837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.175035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.175153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.175179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.175300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.175414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.175440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 00:29:41.711 [2024-04-26 15:10:27.175582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.175735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.711 [2024-04-26 15:10:27.175761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420 00:29:41.711 qpair failed and we were unable to recover it. 
00:29:41.711 [2024-04-26 15:10:27.176006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.711 [2024-04-26 15:10:27.176152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.711 [2024-04-26 15:10:27.176178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.711 qpair failed and we were unable to recover it.
00:29:41.711 [... the same connect() failed / sock connection error / qpair failed sequence repeats for every retry from 15:10:27.176292 through 15:10:27.191973 ...]
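errno = 111 in the posix_sock_create failures above is Linux ECONNREFUSED: nothing is listening on 10.0.0.2:4420 yet, so every TCP connect() from the initiator is rejected immediately and the qpair can never be set up. A minimal shell check (assuming a Linux host; 127.0.0.1 stands in for the test address):

  # errno 111 on Linux is ECONNREFUSED:
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # ECONNREFUSED Connection refused
  # connect() to a port with no listener is refused immediately:
  bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420'   # fails with "Connection refused"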
00:29:41.713 [2024-04-26 15:10:27.192084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.713 [2024-04-26 15:10:27.192208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.713 [2024-04-26 15:10:27.192233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.713 qpair failed and we were unable to recover it.
00:29:41.713 [... connect()/qpair failure sequence repeats through 15:10:27.193193 ...]
15:10:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
15:10:27 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
15:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable
15:10:27 -- common/autotest_common.sh@10 -- # set +x
00:29:41.713 [... connect()/qpair failure sequence continues, interleaved with the xtrace output, through 15:10:27.194490 ...]
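The rpc_cmd trace above creates the test's backing device; rpc_cmd is the autotest harness's wrapper around SPDK's JSON-RPC client. A rough standalone equivalent, assuming a running SPDK target on the default RPC socket (a sketch, not the harness's exact invocation):

  # 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_get_bdevs -b Malloc0   # verify: prints the new bdev's descriptor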
00:29:41.713 [2024-04-26 15:10:27.194699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.713 [2024-04-26 15:10:27.194881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.713 [2024-04-26 15:10:27.194918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.713 qpair failed and we were unable to recover it.
00:29:41.715 [... the same connect() failed / sock connection error / qpair failed sequence repeats for every retry through 15:10:27.218041 ...]
00:29:41.716 [2024-04-26 15:10:27.218160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.218338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.218364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [... connect()/qpair failure sequence repeats through 15:10:27.219062 ...]
00:29:41.716 Malloc0
00:29:41.716 [... connect()/qpair failure sequence continues through 15:10:27.219560 ...]
15:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
15:10:27 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
15:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable
15:10:27 -- common/autotest_common.sh@10 -- # set +x
00:29:41.716 [... connect()/qpair failure sequence continues through 15:10:27.220530 ...]
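With Malloc0 created (the bare "Malloc0" line is the RPC's output), the script brings up the NVMe-oF TCP transport. A sketch of the usual target-side sequence via scripts/rpc.py; the subsystem and listener steps are assumed here rather than shown in this log, the example NQN and serial are placeholders, and the meaning of the script's -o flag is documented by scripts/rpc.py nvmf_create_transport -h:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  # Typical follow-up (assumed) to expose Malloc0 over NVMe/TCP:
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420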
00:29:41.716 [2024-04-26 15:10:27.220636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.220815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.220841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [... connect()/qpair failure sequence repeats through 15:10:27.222680 ...]
00:29:41.716 [2024-04-26 15:10:27.222804] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:41.716 [2024-04-26 15:10:27.222835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.222879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
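The TCP Transport Init notice marks the target-side transport coming up; the initiator keeps retrying until a listener actually answers on 10.0.0.2:4420. For reference, a host-side attempt with nvme-cli fails the same way until then (a sketch; this test drives SPDK's own initiator, not nvme-cli, and the NQN below is assumed):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Until the listener exists: connect() -> ECONNREFUSED (errno 111), so no qpair is established.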
00:29:41.716 [2024-04-26 15:10:27.223093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.223216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.223243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [2024-04-26 15:10:27.223382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.223487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.223512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [2024-04-26 15:10:27.223666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.223883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.223908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [2024-04-26 15:10:27.224054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.224205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.224231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [2024-04-26 15:10:27.224379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.224512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.224537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [2024-04-26 15:10:27.224731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.224843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.224869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [2024-04-26 15:10:27.225011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.225151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.225176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [2024-04-26 15:10:27.225347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.225508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.225533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [2024-04-26 15:10:27.225658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.225838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.225864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.716 qpair failed and we were unable to recover it.
00:29:41.716 [2024-04-26 15:10:27.226033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.226190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.716 [2024-04-26 15:10:27.226216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.226335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.226479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.226504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.226616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.226855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.226879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.227046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.227182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.227208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.227394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.227533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.227574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.227823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.228042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.228069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.228262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.228411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.228436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.228621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.228797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.228822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.228990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.229178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.229205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.229360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.229469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.229499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.229675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.229827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.229853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.230091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.230288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.230313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.230435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.230608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.230633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.230826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.231009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 15:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:41.717 [2024-04-26 15:10:27.231050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 15:10:27 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:41.717 [2024-04-26 15:10:27.231233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 15:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:41.717 15:10:27 -- common/autotest_common.sh@10 -- # set +x
00:29:41.717 [2024-04-26 15:10:27.231405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.231431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.231598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.231738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.231763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.231872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.232007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.232043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.232223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.232445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.232471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.232591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.232760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.232786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
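The nvmf_create_subsystem trace above registers the test subsystem. Replayed directly against SPDK's rpc.py (script path assumed), -a allows any host to connect and -s sets the serial number, matching the arguments in the trace:
  # create subsystem cnode1, allow any host, serial SPDK00000000000001
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001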
00:29:41.717 [2024-04-26 15:10:27.232978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.233129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.233155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.233310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.233505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.233530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.233685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.233896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.233921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.234081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.234278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.234304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.234488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.234661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.234686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.234892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.235068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.235096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.235269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.235420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.235445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.235666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.235817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.235840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.236060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.236201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.236227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.717 qpair failed and we were unable to recover it.
00:29:41.717 [2024-04-26 15:10:27.236366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.236577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.717 [2024-04-26 15:10:27.236602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.236841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.237055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.237081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.237228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.237430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.237456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.237695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.237838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.237863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.238013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.238208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.238234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.238414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.238571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.238595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.238749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.238898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.238925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 15:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:41.718 [2024-04-26 15:10:27.239097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 15:10:27 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:41.718 [2024-04-26 15:10:27.239271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.239298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 15:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 15:10:27 -- common/autotest_common.sh@10 -- # set +x
00:29:41.718 [2024-04-26 15:10:27.239468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.239635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.239661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.239830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.240049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.240076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.240270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.240400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.240425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.240572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.240741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.240781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
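The nvmf_subsystem_add_ns trace above attaches the Malloc0 bdev (its name also shows up as rpc_cmd output earlier in the log) to cnode1 as a namespace. A sketch of the two-step equivalent with rpc.py; the malloc size and block size are assumptions, since the bdev was created before this excerpt:
  # a Malloc0 bdev must exist first (64 MiB with 512-byte blocks assumed)
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0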
00:29:41.718 [2024-04-26 15:10:27.241010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.241200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.241226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.241370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.241479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.241505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.241644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.241849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.241874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.242012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.242125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.242151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.242328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.242473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.242514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.242687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.242821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.242847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.243002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.243191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.243218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.243352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.243494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.243520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.243706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.243808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.243834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.244045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.244246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.244271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.244410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.244633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.244658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.244900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.245047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.245074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.245246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.245400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.718 [2024-04-26 15:10:27.245425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.718 qpair failed and we were unable to recover it.
00:29:41.718 [2024-04-26 15:10:27.245568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.245737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.245777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.246012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.246187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.246214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.246378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.246522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.246562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.246683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.246853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.246878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.247062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 15:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:41.719 [2024-04-26 15:10:27.247215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 15:10:27 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:41.719 [2024-04-26 15:10:27.247241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 15:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:41.719 [2024-04-26 15:10:27.247394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 15:10:27 -- common/autotest_common.sh@10 -- # set +x
00:29:41.719 [2024-04-26 15:10:27.247545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.247571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.247752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.247924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.247949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.248053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.248259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.248285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.248461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.248601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.248627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.248797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.248977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.249002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.249232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.249389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.249414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.249576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.249803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.249828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.250042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.250236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.250262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.250489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.250669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.250694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a45b0 with addr=10.0.0.2, port=4420
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.250912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-04-26 15:10:27.251064] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:41.719 [2024-04-26 15:10:27.254031] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:29:41.719 [2024-04-26 15:10:27.254104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a45b0 (107): Transport endpoint is not connected
00:29:41.719 [2024-04-26 15:10:27.254185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.719 qpair failed and we were unable to recover it.
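With the nvmf_tcp_listen notice above, the target side is fully wired: TCP transport, subsystem cnode1 with the Malloc0 namespace, and a listener on 10.0.0.2:4420. From any initiator with nvme-cli installed (not part of this run; a sketch only), the same endpoint could be probed with:
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1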
00:29:41.719 15:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:41.719 15:10:27 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:41.719 15:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:41.719 15:10:27 -- common/autotest_common.sh@10 -- # set +x
00:29:41.719 15:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:41.719 15:10:27 -- host/target_disconnect.sh@58 -- # wait 3906994
00:29:41.719 [2024-04-26 15:10:27.263525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.719 [2024-04-26 15:10:27.263668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.719 [2024-04-26 15:10:27.263697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.719 [2024-04-26 15:10:27.263713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.719 [2024-04-26 15:10:27.263726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.719 [2024-04-26 15:10:27.263756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.273442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.719 [2024-04-26 15:10:27.273548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.719 [2024-04-26 15:10:27.273575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.719 [2024-04-26 15:10:27.273590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.719 [2024-04-26 15:10:27.273602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.719 [2024-04-26 15:10:27.273631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.283380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.719 [2024-04-26 15:10:27.283491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.719 [2024-04-26 15:10:27.283517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.719 [2024-04-26 15:10:27.283532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.719 [2024-04-26 15:10:27.283545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.719 [2024-04-26 15:10:27.283573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.719 qpair failed and we were unable to recover it.
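For reference when reading the failure loop that follows: errno 111 from connect() is ECONNREFUSED, and the Fabrics CONNECT completion reported as sct 1, sc 130 decodes to command-specific status 0x82 (Connect Invalid Parameters), which lines up with the target-side "Unknown controller ID 0x1" errors: the host keeps retrying an I/O qpair for a controller the target no longer knows about. A quick decode, assuming python3 is available on the host:
  python3 -c 'import os; print(os.strerror(111))'   # Connection refused
  printf 'sc 130 = 0x%x\n' 130                      # 0x82, Fabrics CONNECT: invalid parameters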
00:29:41.719 [2024-04-26 15:10:27.293422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.719 [2024-04-26 15:10:27.293539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.719 [2024-04-26 15:10:27.293563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.719 [2024-04-26 15:10:27.293578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.719 [2024-04-26 15:10:27.293591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.719 [2024-04-26 15:10:27.293619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.303443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.719 [2024-04-26 15:10:27.303550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.719 [2024-04-26 15:10:27.303577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.719 [2024-04-26 15:10:27.303593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.719 [2024-04-26 15:10:27.303606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.719 [2024-04-26 15:10:27.303634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.719 qpair failed and we were unable to recover it.
00:29:41.719 [2024-04-26 15:10:27.313443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.719 [2024-04-26 15:10:27.313552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.313579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.313594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.313606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.313636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.323553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.323683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.323710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.323726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.323738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.323768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.333622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.333738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.333764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.333779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.333792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.333820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.343499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.343602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.343633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.343649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.343661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.343690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.353553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.353651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.353677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.353691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.353704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.353732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.363581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.363689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.363714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.363729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.363742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.363770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.373635] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.373742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.373768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.373783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.373795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.373823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.383736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.383886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.383912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.383927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.383939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.383967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.393741] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.393841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.393865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.393879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.393892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.393920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.403736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.403844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.403869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.403883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.403896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.403924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.413780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.413882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.413906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.413920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.413932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.413960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.423859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.423959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.423983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.424013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.424038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.424079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.720 [2024-04-26 15:10:27.433836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.720 [2024-04-26 15:10:27.433943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.720 [2024-04-26 15:10:27.433974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.720 [2024-04-26 15:10:27.433989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.720 [2024-04-26 15:10:27.434017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.720 [2024-04-26 15:10:27.434064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.720 qpair failed and we were unable to recover it.
00:29:41.980 [2024-04-26 15:10:27.443877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.980 [2024-04-26 15:10:27.443983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.980 [2024-04-26 15:10:27.444032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.980 [2024-04-26 15:10:27.444050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.980 [2024-04-26 15:10:27.444063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.980 [2024-04-26 15:10:27.444104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.980 qpair failed and we were unable to recover it.
00:29:41.980 [2024-04-26 15:10:27.453898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.980 [2024-04-26 15:10:27.454026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.980 [2024-04-26 15:10:27.454052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.980 [2024-04-26 15:10:27.454068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.980 [2024-04-26 15:10:27.454081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.980 [2024-04-26 15:10:27.454110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.980 qpair failed and we were unable to recover it.
00:29:41.980 [2024-04-26 15:10:27.463926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.980 [2024-04-26 15:10:27.464036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.980 [2024-04-26 15:10:27.464062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.980 [2024-04-26 15:10:27.464076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.980 [2024-04-26 15:10:27.464089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:41.980 [2024-04-26 15:10:27.464118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.980 qpair failed and we were unable to recover it.
00:29:41.980 [2024-04-26 15:10:27.473951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.980 [2024-04-26 15:10:27.474066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.980 [2024-04-26 15:10:27.474090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.980 [2024-04-26 15:10:27.474105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.980 [2024-04-26 15:10:27.474118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.980 [2024-04-26 15:10:27.474152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-04-26 15:10:27.483961] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.980 [2024-04-26 15:10:27.484087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.980 [2024-04-26 15:10:27.484114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.980 [2024-04-26 15:10:27.484130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.980 [2024-04-26 15:10:27.484142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.980 [2024-04-26 15:10:27.484182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-04-26 15:10:27.494053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.980 [2024-04-26 15:10:27.494169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.980 [2024-04-26 15:10:27.494195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.980 [2024-04-26 15:10:27.494211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.980 [2024-04-26 15:10:27.494223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.980 [2024-04-26 15:10:27.494253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.980 qpair failed and we were unable to recover it. 
00:29:41.980 [2024-04-26 15:10:27.504051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.980 [2024-04-26 15:10:27.504165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.980 [2024-04-26 15:10:27.504191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.980 [2024-04-26 15:10:27.504206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.980 [2024-04-26 15:10:27.504220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.980 [2024-04-26 15:10:27.504249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-04-26 15:10:27.514079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.980 [2024-04-26 15:10:27.514191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.980 [2024-04-26 15:10:27.514218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.980 [2024-04-26 15:10:27.514233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.980 [2024-04-26 15:10:27.514246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.980 [2024-04-26 15:10:27.514276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-04-26 15:10:27.524154] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.980 [2024-04-26 15:10:27.524266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.980 [2024-04-26 15:10:27.524312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.980 [2024-04-26 15:10:27.524329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.980 [2024-04-26 15:10:27.524342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.980 [2024-04-26 15:10:27.524370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.980 qpair failed and we were unable to recover it. 
00:29:41.980 [2024-04-26 15:10:27.534128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.980 [2024-04-26 15:10:27.534233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.980 [2024-04-26 15:10:27.534260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.980 [2024-04-26 15:10:27.534275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.980 [2024-04-26 15:10:27.534288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.980 [2024-04-26 15:10:27.534316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-04-26 15:10:27.544177] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.980 [2024-04-26 15:10:27.544279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.980 [2024-04-26 15:10:27.544318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.980 [2024-04-26 15:10:27.544333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.980 [2024-04-26 15:10:27.544345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.980 [2024-04-26 15:10:27.544374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-04-26 15:10:27.554203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.980 [2024-04-26 15:10:27.554338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.980 [2024-04-26 15:10:27.554365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.980 [2024-04-26 15:10:27.554381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.554393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.554432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 
00:29:41.981 [2024-04-26 15:10:27.564188] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.564325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.564351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.564365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.564378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.564413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 00:29:41.981 [2024-04-26 15:10:27.574239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.574343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.574384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.574399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.574412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.574440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 00:29:41.981 [2024-04-26 15:10:27.584350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.584453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.584477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.584491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.584504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.584532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 
00:29:41.981 [2024-04-26 15:10:27.594285] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.594413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.594439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.594453] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.594466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.594495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 00:29:41.981 [2024-04-26 15:10:27.604360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.604473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.604498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.604525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.604539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.604568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 00:29:41.981 [2024-04-26 15:10:27.614406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.614505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.614534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.614549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.614561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.614589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 
00:29:41.981 [2024-04-26 15:10:27.624374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.624481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.624504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.624519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.624531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.624559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 00:29:41.981 [2024-04-26 15:10:27.634409] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.634523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.634548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.634563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.634576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.634604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 00:29:41.981 [2024-04-26 15:10:27.644446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.644552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.644578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.644592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.644605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.644632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 
00:29:41.981 [2024-04-26 15:10:27.654471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.654577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.654601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.654615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.654628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.654660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 00:29:41.981 [2024-04-26 15:10:27.664486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.664608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.664635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.664649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.664662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.664690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 00:29:41.981 [2024-04-26 15:10:27.674534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.674640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.674666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.674681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.674693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.674722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 
00:29:41.981 [2024-04-26 15:10:27.684590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.684727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.981 [2024-04-26 15:10:27.684752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.981 [2024-04-26 15:10:27.684767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.981 [2024-04-26 15:10:27.684779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.981 [2024-04-26 15:10:27.684811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.981 qpair failed and we were unable to recover it. 00:29:41.981 [2024-04-26 15:10:27.694610] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.981 [2024-04-26 15:10:27.694706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.982 [2024-04-26 15:10:27.694730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.982 [2024-04-26 15:10:27.694744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.982 [2024-04-26 15:10:27.694757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.982 [2024-04-26 15:10:27.694785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.982 qpair failed and we were unable to recover it. 00:29:41.982 [2024-04-26 15:10:27.704658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.982 [2024-04-26 15:10:27.704793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.982 [2024-04-26 15:10:27.704823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.982 [2024-04-26 15:10:27.704839] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.982 [2024-04-26 15:10:27.704864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.982 [2024-04-26 15:10:27.704891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.982 qpair failed and we were unable to recover it. 
00:29:41.982 [2024-04-26 15:10:27.714699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.982 [2024-04-26 15:10:27.714809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.982 [2024-04-26 15:10:27.714835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.982 [2024-04-26 15:10:27.714850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.982 [2024-04-26 15:10:27.714863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:41.982 [2024-04-26 15:10:27.714893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.982 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.724683] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.724826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.724860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.724876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.724890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.724919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.734739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.734867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.734902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.734918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.734931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.734959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 
00:29:42.240 [2024-04-26 15:10:27.744723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.744830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.744854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.744868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.744885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.744926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.754758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.754857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.754881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.754895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.754907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.754935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.764836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.764941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.764965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.764980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.764992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.765046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 
00:29:42.240 [2024-04-26 15:10:27.774835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.774985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.775040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.775057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.775071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.775101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.784903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.785078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.785106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.785122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.785135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.785166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.794862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.794980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.795027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.795047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.795060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.795091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 
00:29:42.240 [2024-04-26 15:10:27.804956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.805113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.805152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.805167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.805180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.805210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.814956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.815080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.815117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.815132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.815146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.815176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.824976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.825097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.825121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.825136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.825157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.825186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 
00:29:42.240 [2024-04-26 15:10:27.835050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.835174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.835212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.835228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.835247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.835277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.845144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.845254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.845279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.845294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.845307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.845351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.855102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.855255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.855282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.855299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.855312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.855341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 
00:29:42.240 [2024-04-26 15:10:27.865079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.865194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.865221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.865236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.865249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.865278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.875147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.875269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.875293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.875321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.875334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.875362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.885150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.885266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.885293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.885322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.885336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.885365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 
00:29:42.240 [2024-04-26 15:10:27.895162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.240 [2024-04-26 15:10:27.895269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.240 [2024-04-26 15:10:27.895298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.240 [2024-04-26 15:10:27.895313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.240 [2024-04-26 15:10:27.895326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.240 [2024-04-26 15:10:27.895369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.240 qpair failed and we were unable to recover it. 00:29:42.240 [2024-04-26 15:10:27.905195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.241 [2024-04-26 15:10:27.905299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.241 [2024-04-26 15:10:27.905340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.241 [2024-04-26 15:10:27.905355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.241 [2024-04-26 15:10:27.905367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.241 [2024-04-26 15:10:27.905395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.241 qpair failed and we were unable to recover it. 00:29:42.241 [2024-04-26 15:10:27.915218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.241 [2024-04-26 15:10:27.915338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.241 [2024-04-26 15:10:27.915363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.241 [2024-04-26 15:10:27.915378] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.241 [2024-04-26 15:10:27.915389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.241 [2024-04-26 15:10:27.915417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.241 qpair failed and we were unable to recover it. 
00:29:42.241 [2024-04-26 15:10:27.925262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.241 [2024-04-26 15:10:27.925370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.241 [2024-04-26 15:10:27.925410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.241 [2024-04-26 15:10:27.925424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.241 [2024-04-26 15:10:27.925442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.241 [2024-04-26 15:10:27.925471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.241 qpair failed and we were unable to recover it. 00:29:42.241 [2024-04-26 15:10:27.935289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.241 [2024-04-26 15:10:27.935414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.241 [2024-04-26 15:10:27.935440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.241 [2024-04-26 15:10:27.935455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.241 [2024-04-26 15:10:27.935467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.241 [2024-04-26 15:10:27.935495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.241 qpair failed and we were unable to recover it. 00:29:42.241 [2024-04-26 15:10:27.945331] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.241 [2024-04-26 15:10:27.945435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.241 [2024-04-26 15:10:27.945461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.241 [2024-04-26 15:10:27.945476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.241 [2024-04-26 15:10:27.945488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.241 [2024-04-26 15:10:27.945516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.241 qpair failed and we were unable to recover it. 
00:29:42.241 [2024-04-26 15:10:27.955410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.241 [2024-04-26 15:10:27.955508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.241 [2024-04-26 15:10:27.955533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.241 [2024-04-26 15:10:27.955547] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.241 [2024-04-26 15:10:27.955559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.241 [2024-04-26 15:10:27.955587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.241 qpair failed and we were unable to recover it. 00:29:42.241 [2024-04-26 15:10:27.965397] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.241 [2024-04-26 15:10:27.965508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.241 [2024-04-26 15:10:27.965534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.241 [2024-04-26 15:10:27.965549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.241 [2024-04-26 15:10:27.965561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.241 [2024-04-26 15:10:27.965589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.241 qpair failed and we were unable to recover it. 00:29:42.241 [2024-04-26 15:10:27.975389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.241 [2024-04-26 15:10:27.975496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.241 [2024-04-26 15:10:27.975522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.241 [2024-04-26 15:10:27.975537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.241 [2024-04-26 15:10:27.975549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.241 [2024-04-26 15:10:27.975577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.241 qpair failed and we were unable to recover it. 
00:29:42.499 [2024-04-26 15:10:27.985420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.499 [2024-04-26 15:10:27.985529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.499 [2024-04-26 15:10:27.985555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.499 [2024-04-26 15:10:27.985569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.499 [2024-04-26 15:10:27.985581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.499 [2024-04-26 15:10:27.985609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.499 qpair failed and we were unable to recover it. 00:29:42.499 [2024-04-26 15:10:27.995456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.499 [2024-04-26 15:10:27.995552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.499 [2024-04-26 15:10:27.995579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.499 [2024-04-26 15:10:27.995593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.499 [2024-04-26 15:10:27.995605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.499 [2024-04-26 15:10:27.995633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.499 qpair failed and we were unable to recover it. 00:29:42.499 [2024-04-26 15:10:28.005577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.499 [2024-04-26 15:10:28.005689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.499 [2024-04-26 15:10:28.005715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.499 [2024-04-26 15:10:28.005730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.499 [2024-04-26 15:10:28.005742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:42.499 [2024-04-26 15:10:28.005770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.499 qpair failed and we were unable to recover it. 
[The identical seven-record CONNECT failure sequence repeats at roughly 10 ms intervals from 15:10:27.995 through 15:10:28.667 (elapsed 00:29:42.499-00:29:43.018). Every retry against tqpair=0x6a45b0 (qpair id 3) reports "Unknown controller ID 0x1", a connect poll failure with rc -5 and sct 1, sc 130, and a "CQ transport error -6 (No such device or address)", and each ends with "qpair failed and we were unable to recover it."]
00:29:43.018 [2024-04-26 15:10:28.677459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.018 [2024-04-26 15:10:28.677599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.018 [2024-04-26 15:10:28.677625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.018 [2024-04-26 15:10:28.677640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.018 [2024-04-26 15:10:28.677652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.018 [2024-04-26 15:10:28.677680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.018 qpair failed and we were unable to recover it. 00:29:43.018 [2024-04-26 15:10:28.687518] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.018 [2024-04-26 15:10:28.687630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.018 [2024-04-26 15:10:28.687656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.018 [2024-04-26 15:10:28.687670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.018 [2024-04-26 15:10:28.687687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.018 [2024-04-26 15:10:28.687715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.018 qpair failed and we were unable to recover it. 00:29:43.018 [2024-04-26 15:10:28.697575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.018 [2024-04-26 15:10:28.697677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.018 [2024-04-26 15:10:28.697703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.018 [2024-04-26 15:10:28.697718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.018 [2024-04-26 15:10:28.697730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.018 [2024-04-26 15:10:28.697766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.018 qpair failed and we were unable to recover it. 
00:29:43.018 [2024-04-26 15:10:28.707565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.018 [2024-04-26 15:10:28.707685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.018 [2024-04-26 15:10:28.707711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.018 [2024-04-26 15:10:28.707726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.018 [2024-04-26 15:10:28.707738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.018 [2024-04-26 15:10:28.707766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.018 qpair failed and we were unable to recover it. 00:29:43.018 [2024-04-26 15:10:28.717583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.018 [2024-04-26 15:10:28.717721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.018 [2024-04-26 15:10:28.717747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.018 [2024-04-26 15:10:28.717762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.018 [2024-04-26 15:10:28.717774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.018 [2024-04-26 15:10:28.717802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.018 qpair failed and we were unable to recover it. 00:29:43.018 [2024-04-26 15:10:28.727590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.018 [2024-04-26 15:10:28.727705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.018 [2024-04-26 15:10:28.727731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.018 [2024-04-26 15:10:28.727745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.018 [2024-04-26 15:10:28.727757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.018 [2024-04-26 15:10:28.727786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.018 qpair failed and we were unable to recover it. 
00:29:43.018 [2024-04-26 15:10:28.737643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.018 [2024-04-26 15:10:28.737786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.018 [2024-04-26 15:10:28.737811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.018 [2024-04-26 15:10:28.737826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.018 [2024-04-26 15:10:28.737839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.018 [2024-04-26 15:10:28.737866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.018 qpair failed and we were unable to recover it. 00:29:43.018 [2024-04-26 15:10:28.747616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.018 [2024-04-26 15:10:28.747732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.018 [2024-04-26 15:10:28.747758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.018 [2024-04-26 15:10:28.747773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.018 [2024-04-26 15:10:28.747785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.018 [2024-04-26 15:10:28.747813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.018 qpair failed and we were unable to recover it. 00:29:43.278 [2024-04-26 15:10:28.757658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.278 [2024-04-26 15:10:28.757765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.278 [2024-04-26 15:10:28.757790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.278 [2024-04-26 15:10:28.757805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.278 [2024-04-26 15:10:28.757817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.278 [2024-04-26 15:10:28.757845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.278 qpair failed and we were unable to recover it. 
00:29:43.278 [2024-04-26 15:10:28.767722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.278 [2024-04-26 15:10:28.767856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.278 [2024-04-26 15:10:28.767882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.278 [2024-04-26 15:10:28.767897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.278 [2024-04-26 15:10:28.767909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.278 [2024-04-26 15:10:28.767937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.278 qpair failed and we were unable to recover it. 00:29:43.278 [2024-04-26 15:10:28.777781] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.278 [2024-04-26 15:10:28.777934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.278 [2024-04-26 15:10:28.777960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.278 [2024-04-26 15:10:28.777982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.278 [2024-04-26 15:10:28.778014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.278 [2024-04-26 15:10:28.778061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.278 qpair failed and we were unable to recover it. 00:29:43.278 [2024-04-26 15:10:28.787788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.278 [2024-04-26 15:10:28.787933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.278 [2024-04-26 15:10:28.787959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.278 [2024-04-26 15:10:28.787973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.278 [2024-04-26 15:10:28.787985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.278 [2024-04-26 15:10:28.788039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.278 qpair failed and we were unable to recover it. 
00:29:43.278 [2024-04-26 15:10:28.797786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.278 [2024-04-26 15:10:28.797897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.278 [2024-04-26 15:10:28.797921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.278 [2024-04-26 15:10:28.797935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.278 [2024-04-26 15:10:28.797948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.278 [2024-04-26 15:10:28.797977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.278 qpair failed and we were unable to recover it. 00:29:43.278 [2024-04-26 15:10:28.807845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.278 [2024-04-26 15:10:28.807958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.278 [2024-04-26 15:10:28.807984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.278 [2024-04-26 15:10:28.807999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.278 [2024-04-26 15:10:28.808058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.278 [2024-04-26 15:10:28.808091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.278 qpair failed and we were unable to recover it. 00:29:43.278 [2024-04-26 15:10:28.817821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.278 [2024-04-26 15:10:28.817926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.817952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.817966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.817979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.818044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 
00:29:43.279 [2024-04-26 15:10:28.827901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.828043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.828070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.828085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.828097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.828138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 00:29:43.279 [2024-04-26 15:10:28.837918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.838068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.838096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.838111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.838124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.838153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 00:29:43.279 [2024-04-26 15:10:28.847959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.848129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.848156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.848171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.848184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.848213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 
00:29:43.279 [2024-04-26 15:10:28.858077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.858195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.858221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.858236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.858249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.858278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 00:29:43.279 [2024-04-26 15:10:28.868028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.868178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.868205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.868219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.868237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.868267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 00:29:43.279 [2024-04-26 15:10:28.878076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.878207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.878232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.878246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.878259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.878292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 
00:29:43.279 [2024-04-26 15:10:28.888090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.888207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.888234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.888249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.888261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.888290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 00:29:43.279 [2024-04-26 15:10:28.898087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.898232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.898259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.898274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.898287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.898340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 00:29:43.279 [2024-04-26 15:10:28.908118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.908222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.908249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.908264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.908276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.908320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 
00:29:43.279 [2024-04-26 15:10:28.918126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.918241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.918269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.918284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.918296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.918340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 00:29:43.279 [2024-04-26 15:10:28.928197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.928365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.928390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.928404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.928426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.928454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 00:29:43.279 [2024-04-26 15:10:28.938223] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.938336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.279 [2024-04-26 15:10:28.938362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.279 [2024-04-26 15:10:28.938376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.279 [2024-04-26 15:10:28.938389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.279 [2024-04-26 15:10:28.938417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.279 qpair failed and we were unable to recover it. 
00:29:43.279 [2024-04-26 15:10:28.948295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.279 [2024-04-26 15:10:28.948420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.280 [2024-04-26 15:10:28.948445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.280 [2024-04-26 15:10:28.948459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.280 [2024-04-26 15:10:28.948471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.280 [2024-04-26 15:10:28.948500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.280 qpair failed and we were unable to recover it. 00:29:43.280 [2024-04-26 15:10:28.958230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.280 [2024-04-26 15:10:28.958350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.280 [2024-04-26 15:10:28.958375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.280 [2024-04-26 15:10:28.958394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.280 [2024-04-26 15:10:28.958407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.280 [2024-04-26 15:10:28.958435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.280 qpair failed and we were unable to recover it. 00:29:43.280 [2024-04-26 15:10:28.968311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.280 [2024-04-26 15:10:28.968428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.280 [2024-04-26 15:10:28.968455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.280 [2024-04-26 15:10:28.968469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.280 [2024-04-26 15:10:28.968481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.280 [2024-04-26 15:10:28.968510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.280 qpair failed and we were unable to recover it. 
00:29:43.280 [2024-04-26 15:10:28.978339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.280 [2024-04-26 15:10:28.978483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.280 [2024-04-26 15:10:28.978507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.280 [2024-04-26 15:10:28.978522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.280 [2024-04-26 15:10:28.978535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.280 [2024-04-26 15:10:28.978564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.280 qpair failed and we were unable to recover it. 00:29:43.280 [2024-04-26 15:10:28.988340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.280 [2024-04-26 15:10:28.988456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.280 [2024-04-26 15:10:28.988480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.280 [2024-04-26 15:10:28.988495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.280 [2024-04-26 15:10:28.988507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.280 [2024-04-26 15:10:28.988536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.280 qpair failed and we were unable to recover it. 00:29:43.280 [2024-04-26 15:10:28.998422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.280 [2024-04-26 15:10:28.998558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.280 [2024-04-26 15:10:28.998583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.280 [2024-04-26 15:10:28.998598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.280 [2024-04-26 15:10:28.998610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.280 [2024-04-26 15:10:28.998639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.280 qpair failed and we were unable to recover it. 
00:29:43.280 [2024-04-26 15:10:29.008399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.280 [2024-04-26 15:10:29.008512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.280 [2024-04-26 15:10:29.008536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.280 [2024-04-26 15:10:29.008550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.280 [2024-04-26 15:10:29.008563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.280 [2024-04-26 15:10:29.008592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.280 qpair failed and we were unable to recover it. 00:29:43.538 [2024-04-26 15:10:29.018449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.018550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.018576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.018592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.018604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.018634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 00:29:43.539 [2024-04-26 15:10:29.028464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.028564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.028591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.028605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.028617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.028646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 
00:29:43.539 [2024-04-26 15:10:29.038516] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.038647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.038672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.038687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.038699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.038737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 00:29:43.539 [2024-04-26 15:10:29.048509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.048611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.048635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.048662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.048676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.048704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 00:29:43.539 [2024-04-26 15:10:29.058562] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.058670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.058694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.058708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.058721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.058759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 
00:29:43.539 [2024-04-26 15:10:29.068575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.068672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.068696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.068710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.068722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.068750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 00:29:43.539 [2024-04-26 15:10:29.078596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.078704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.078728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.078742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.078754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.078783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 00:29:43.539 [2024-04-26 15:10:29.088639] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.088749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.088775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.088789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.088801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.088829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 
00:29:43.539 [2024-04-26 15:10:29.098657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.098756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.098781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.098795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.098807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.098835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 00:29:43.539 [2024-04-26 15:10:29.108657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.108773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.108799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.108813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.108825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.108853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 00:29:43.539 [2024-04-26 15:10:29.118681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.118795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.118819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.118834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.118846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.118873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 
00:29:43.539 [2024-04-26 15:10:29.128707] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.128810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.128834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.128848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.128860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.128889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 00:29:43.539 [2024-04-26 15:10:29.138756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.138853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.138877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.138896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.138910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.138938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 00:29:43.539 [2024-04-26 15:10:29.148767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.539 [2024-04-26 15:10:29.148897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.539 [2024-04-26 15:10:29.148923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.539 [2024-04-26 15:10:29.148938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.539 [2024-04-26 15:10:29.148950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.539 [2024-04-26 15:10:29.148978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.539 qpair failed and we were unable to recover it. 
00:29:43.539 [2024-04-26 15:10:29.158797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.158894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.158918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.158933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.158945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.158973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 00:29:43.540 [2024-04-26 15:10:29.168834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.168941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.168965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.168979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.168993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.169045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 00:29:43.540 [2024-04-26 15:10:29.178895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.179017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.179054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.179070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.179083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.179113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 
00:29:43.540 [2024-04-26 15:10:29.188913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.189037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.189065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.189080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.189093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.189132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 00:29:43.540 [2024-04-26 15:10:29.198941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.199089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.199116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.199132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.199145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.199174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 00:29:43.540 [2024-04-26 15:10:29.208988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.209116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.209141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.209156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.209169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.209199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 
00:29:43.540 [2024-04-26 15:10:29.219029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.219137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.219163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.219178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.219191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.219220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 00:29:43.540 [2024-04-26 15:10:29.229052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.229166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.229193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.229213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.229232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.229261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 00:29:43.540 [2024-04-26 15:10:29.239082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.239187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.239213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.239228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.239242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.239271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 
00:29:43.540 [2024-04-26 15:10:29.249098] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.249230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.249256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.249271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.249284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.249313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 00:29:43.540 [2024-04-26 15:10:29.259139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.259271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.259296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.259325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.259338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.259367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 00:29:43.540 [2024-04-26 15:10:29.269158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.540 [2024-04-26 15:10:29.269269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.540 [2024-04-26 15:10:29.269297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.540 [2024-04-26 15:10:29.269312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.540 [2024-04-26 15:10:29.269325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.540 [2024-04-26 15:10:29.269377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.540 qpair failed and we were unable to recover it. 
00:29:43.799 [2024-04-26 15:10:29.279230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.799 [2024-04-26 15:10:29.279368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.279395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.279410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.279422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.279450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 00:29:43.800 [2024-04-26 15:10:29.289280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.289467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.289493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.289508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.289522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.289551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 00:29:43.800 [2024-04-26 15:10:29.299266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.299394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.299418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.299432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.299445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.299473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 
00:29:43.800 [2024-04-26 15:10:29.309249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.309354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.309394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.309409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.309421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.309450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 00:29:43.800 [2024-04-26 15:10:29.319295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.319411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.319442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.319457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.319469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.319497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 00:29:43.800 [2024-04-26 15:10:29.329374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.329484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.329511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.329526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.329538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.329573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 
00:29:43.800 [2024-04-26 15:10:29.339317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.339467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.339492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.339507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.339520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.339549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 00:29:43.800 [2024-04-26 15:10:29.349352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.349445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.349469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.349483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.349496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.349524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 00:29:43.800 [2024-04-26 15:10:29.359424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.359527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.359550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.359565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.359591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.359626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 
00:29:43.800 [2024-04-26 15:10:29.369456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.369561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.369585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.369599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.369611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.369639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 00:29:43.800 [2024-04-26 15:10:29.379496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.379615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.379639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.379654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.379666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.379693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 00:29:43.800 [2024-04-26 15:10:29.389474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.389578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.389602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.389615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.389628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.389657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 
00:29:43.800 [2024-04-26 15:10:29.399526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.399632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.399656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.399670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.399683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.800 [2024-04-26 15:10:29.399711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.800 qpair failed and we were unable to recover it. 00:29:43.800 [2024-04-26 15:10:29.409568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.800 [2024-04-26 15:10:29.409672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.800 [2024-04-26 15:10:29.409701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.800 [2024-04-26 15:10:29.409717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.800 [2024-04-26 15:10:29.409730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.409758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 00:29:43.801 [2024-04-26 15:10:29.419591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.419696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.419720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.419734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.419747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.419775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 
00:29:43.801 [2024-04-26 15:10:29.429640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.429738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.429763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.429777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.429789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.429818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 00:29:43.801 [2024-04-26 15:10:29.439634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.439733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.439757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.439772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.439784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.439814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 00:29:43.801 [2024-04-26 15:10:29.449673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.449805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.449830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.449844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.449856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.449890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 
00:29:43.801 [2024-04-26 15:10:29.459712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.459849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.459874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.459889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.459902] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.459932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 00:29:43.801 [2024-04-26 15:10:29.469717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.469821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.469845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.469859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.469871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.469899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 00:29:43.801 [2024-04-26 15:10:29.479753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.479848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.479872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.479886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.479898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.479926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 
00:29:43.801 [2024-04-26 15:10:29.489805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.489929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.489953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.489968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.489981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.490030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 00:29:43.801 [2024-04-26 15:10:29.499825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.499974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.500026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.500044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.500057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.500086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 00:29:43.801 [2024-04-26 15:10:29.509841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.509943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.509967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.509982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.509995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.510048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 
00:29:43.801 [2024-04-26 15:10:29.519901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.520015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.520047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.520063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.520075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.520105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 00:29:43.801 [2024-04-26 15:10:29.529881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.801 [2024-04-26 15:10:29.529987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.801 [2024-04-26 15:10:29.530035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.801 [2024-04-26 15:10:29.530051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.801 [2024-04-26 15:10:29.530063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:43.801 [2024-04-26 15:10:29.530092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.801 qpair failed and we were unable to recover it. 00:29:44.060 [2024-04-26 15:10:29.539946] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.540091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.540117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.540131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.540144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.540180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 
00:29:44.061 [2024-04-26 15:10:29.549909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.550028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.550054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.550069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.550082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.550111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.559968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.560069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.560094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.560109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.560122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.560150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.569975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.570105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.570132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.570148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.570160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.570190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 
00:29:44.061 [2024-04-26 15:10:29.580030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.580138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.580165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.580180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.580193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.580222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.590028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.590129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.590161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.590177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.590190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.590220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.600103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.600203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.600231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.600246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.600259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.600298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 
00:29:44.061 [2024-04-26 15:10:29.610147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.610288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.610315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.610337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.610364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.610393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.620163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.620269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.620299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.620315] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.620328] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.620372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.630139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.630240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.630267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.630293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.630326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.630355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 
00:29:44.061 [2024-04-26 15:10:29.640192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.640315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.640341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.640356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.640369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.640397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.650248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.650385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.650411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.650425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.650438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.650466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.660249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.660401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.660427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.660442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.660454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.660483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 
00:29:44.061 [2024-04-26 15:10:29.670346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.670472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.670497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.670511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.670523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.670552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.680364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.680465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.680490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.680505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.680529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.680557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.690422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.690530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.690555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.690569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.690582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.690610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 
00:29:44.061 [2024-04-26 15:10:29.700419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.700522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.700546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.700560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.700573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.700602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.710400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.710564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.710590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.710605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.710618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.710646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.720446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.720581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.720607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.720621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.720638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.720667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 
00:29:44.061 [2024-04-26 15:10:29.730505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.730629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.730653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.730667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.730679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.730708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.740505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.740608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.740632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.061 [2024-04-26 15:10:29.740647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.061 [2024-04-26 15:10:29.740660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.061 [2024-04-26 15:10:29.740688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.061 qpair failed and we were unable to recover it. 00:29:44.061 [2024-04-26 15:10:29.750549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.061 [2024-04-26 15:10:29.750670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.061 [2024-04-26 15:10:29.750696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.062 [2024-04-26 15:10:29.750711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.062 [2024-04-26 15:10:29.750723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.062 [2024-04-26 15:10:29.750751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.062 qpair failed and we were unable to recover it. 
00:29:44.062 [2024-04-26 15:10:29.760571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.062 [2024-04-26 15:10:29.760675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.062 [2024-04-26 15:10:29.760699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.062 [2024-04-26 15:10:29.760714] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.062 [2024-04-26 15:10:29.760727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.062 [2024-04-26 15:10:29.760755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.062 qpair failed and we were unable to recover it.
00:29:44.062 [2024-04-26 15:10:29.770602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.062 [2024-04-26 15:10:29.770716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.062 [2024-04-26 15:10:29.770741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.062 [2024-04-26 15:10:29.770755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.062 [2024-04-26 15:10:29.770768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.062 [2024-04-26 15:10:29.770796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.062 qpair failed and we were unable to recover it.
00:29:44.062 [2024-04-26 15:10:29.780586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.062 [2024-04-26 15:10:29.780690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.062 [2024-04-26 15:10:29.780714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.062 [2024-04-26 15:10:29.780728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.062 [2024-04-26 15:10:29.780740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.062 [2024-04-26 15:10:29.780769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.062 qpair failed and we were unable to recover it.
00:29:44.062 [2024-04-26 15:10:29.790613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.062 [2024-04-26 15:10:29.790717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.062 [2024-04-26 15:10:29.790741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.062 [2024-04-26 15:10:29.790755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.062 [2024-04-26 15:10:29.790767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.062 [2024-04-26 15:10:29.790796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.062 qpair failed and we were unable to recover it.
00:29:44.321 [2024-04-26 15:10:29.800666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.321 [2024-04-26 15:10:29.800791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.321 [2024-04-26 15:10:29.800817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.321 [2024-04-26 15:10:29.800832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.321 [2024-04-26 15:10:29.800845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.321 [2024-04-26 15:10:29.800888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.321 qpair failed and we were unable to recover it.
00:29:44.321 [2024-04-26 15:10:29.810709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.321 [2024-04-26 15:10:29.810831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.321 [2024-04-26 15:10:29.810856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.321 [2024-04-26 15:10:29.810870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.321 [2024-04-26 15:10:29.810887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.321 [2024-04-26 15:10:29.810916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.321 qpair failed and we were unable to recover it.
00:29:44.321 [2024-04-26 15:10:29.820734] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.321 [2024-04-26 15:10:29.820833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.321 [2024-04-26 15:10:29.820857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.321 [2024-04-26 15:10:29.820872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.321 [2024-04-26 15:10:29.820884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.321 [2024-04-26 15:10:29.820913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.321 qpair failed and we were unable to recover it.
00:29:44.321 [2024-04-26 15:10:29.830788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.321 [2024-04-26 15:10:29.830921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.321 [2024-04-26 15:10:29.830946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.321 [2024-04-26 15:10:29.830961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.321 [2024-04-26 15:10:29.830973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.321 [2024-04-26 15:10:29.831017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.321 qpair failed and we were unable to recover it.
00:29:44.321 [2024-04-26 15:10:29.840818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.321 [2024-04-26 15:10:29.840957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.321 [2024-04-26 15:10:29.840982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.321 [2024-04-26 15:10:29.840996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.321 [2024-04-26 15:10:29.841035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.321 [2024-04-26 15:10:29.841073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.321 qpair failed and we were unable to recover it.
00:29:44.321 [2024-04-26 15:10:29.850817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.321 [2024-04-26 15:10:29.850921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.321 [2024-04-26 15:10:29.850946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.321 [2024-04-26 15:10:29.850959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.321 [2024-04-26 15:10:29.850972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.321 [2024-04-26 15:10:29.851015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.321 qpair failed and we were unable to recover it.
00:29:44.321 [2024-04-26 15:10:29.860879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.321 [2024-04-26 15:10:29.861040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.321 [2024-04-26 15:10:29.861066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.321 [2024-04-26 15:10:29.861080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.321 [2024-04-26 15:10:29.861093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.321 [2024-04-26 15:10:29.861123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.321 qpair failed and we were unable to recover it.
00:29:44.321 [2024-04-26 15:10:29.870876] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.321 [2024-04-26 15:10:29.870977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.321 [2024-04-26 15:10:29.871015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.321 [2024-04-26 15:10:29.871039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.321 [2024-04-26 15:10:29.871053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.321 [2024-04-26 15:10:29.871084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.321 qpair failed and we were unable to recover it.
00:29:44.321 [2024-04-26 15:10:29.880896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.321 [2024-04-26 15:10:29.881015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.321 [2024-04-26 15:10:29.881046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.321 [2024-04-26 15:10:29.881062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.321 [2024-04-26 15:10:29.881074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.881104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.890955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.891082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.891108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.891124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.891137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.891165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.900937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.901093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.901118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.901138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.901152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.901182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.910993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.911117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.911142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.911157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.911170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.911200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.921028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.921133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.921158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.921172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.921185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.921215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.931084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.931205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.931230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.931245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.931258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.931287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.941083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.941191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.941217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.941231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.941244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.941273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.951081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.951184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.951211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.951226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.951239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.951268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.961228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.961421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.961448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.961463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.961476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.961505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.971147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.971262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.971290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.971305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.971318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.971346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.981198] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.981316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.981342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.981357] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.981368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.981396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:29.991282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:29.991398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:29.991424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:29.991447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:29.991461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:29.991489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:30.001248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:30.001403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:30.001430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:30.001446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:30.001458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:30.001487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:30.011271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:30.011397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.322 [2024-04-26 15:10:30.011424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.322 [2024-04-26 15:10:30.011440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.322 [2024-04-26 15:10:30.011453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.322 [2024-04-26 15:10:30.011481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.322 qpair failed and we were unable to recover it.
00:29:44.322 [2024-04-26 15:10:30.021300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.322 [2024-04-26 15:10:30.021430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.323 [2024-04-26 15:10:30.021457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.323 [2024-04-26 15:10:30.021472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.323 [2024-04-26 15:10:30.021500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.323 [2024-04-26 15:10:30.021531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.323 qpair failed and we were unable to recover it.
00:29:44.323 [2024-04-26 15:10:30.031332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.323 [2024-04-26 15:10:30.031458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.323 [2024-04-26 15:10:30.031487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.323 [2024-04-26 15:10:30.031502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.323 [2024-04-26 15:10:30.031515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.323 [2024-04-26 15:10:30.031544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.323 qpair failed and we were unable to recover it.
00:29:44.323 [2024-04-26 15:10:30.041331] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.323 [2024-04-26 15:10:30.041457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.323 [2024-04-26 15:10:30.041485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.323 [2024-04-26 15:10:30.041500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.323 [2024-04-26 15:10:30.041512] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.323 [2024-04-26 15:10:30.041542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.323 qpair failed and we were unable to recover it.
00:29:44.323 [2024-04-26 15:10:30.051411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.323 [2024-04-26 15:10:30.051515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.323 [2024-04-26 15:10:30.051540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.323 [2024-04-26 15:10:30.051553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.323 [2024-04-26 15:10:30.051566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.323 [2024-04-26 15:10:30.051594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.323 qpair failed and we were unable to recover it.
00:29:44.582 [2024-04-26 15:10:30.061413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.582 [2024-04-26 15:10:30.061533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.582 [2024-04-26 15:10:30.061559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.582 [2024-04-26 15:10:30.061573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.582 [2024-04-26 15:10:30.061586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.582 [2024-04-26 15:10:30.061614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.582 qpair failed and we were unable to recover it.
00:29:44.582 [2024-04-26 15:10:30.071490] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.582 [2024-04-26 15:10:30.071594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.582 [2024-04-26 15:10:30.071621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.582 [2024-04-26 15:10:30.071636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.582 [2024-04-26 15:10:30.071649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.582 [2024-04-26 15:10:30.071678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.582 qpair failed and we were unable to recover it.
00:29:44.582 [2024-04-26 15:10:30.081500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.582 [2024-04-26 15:10:30.081621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.582 [2024-04-26 15:10:30.081647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.582 [2024-04-26 15:10:30.081667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.582 [2024-04-26 15:10:30.081680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.582 [2024-04-26 15:10:30.081715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.582 qpair failed and we were unable to recover it.
00:29:44.582 [2024-04-26 15:10:30.091542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.582 [2024-04-26 15:10:30.091670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.582 [2024-04-26 15:10:30.091696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.582 [2024-04-26 15:10:30.091711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.582 [2024-04-26 15:10:30.091729] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.582 [2024-04-26 15:10:30.091757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.582 qpair failed and we were unable to recover it.
00:29:44.582 [2024-04-26 15:10:30.101561] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.582 [2024-04-26 15:10:30.101675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.582 [2024-04-26 15:10:30.101701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.582 [2024-04-26 15:10:30.101716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.582 [2024-04-26 15:10:30.101728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.582 [2024-04-26 15:10:30.101756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.582 qpair failed and we were unable to recover it.
00:29:44.582 [2024-04-26 15:10:30.111581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.582 [2024-04-26 15:10:30.111730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.582 [2024-04-26 15:10:30.111757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.582 [2024-04-26 15:10:30.111772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.582 [2024-04-26 15:10:30.111784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.582 [2024-04-26 15:10:30.111839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.582 qpair failed and we were unable to recover it.
00:29:44.582 [2024-04-26 15:10:30.121612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.582 [2024-04-26 15:10:30.121712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.582 [2024-04-26 15:10:30.121738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.582 [2024-04-26 15:10:30.121753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.582 [2024-04-26 15:10:30.121764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.582 [2024-04-26 15:10:30.121793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.582 qpair failed and we were unable to recover it.
00:29:44.582 [2024-04-26 15:10:30.131696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.131835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.131862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.131883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.131895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.131923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.141657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.141763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.141789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.141804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.141816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.141845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.151643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.151753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.151778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.151793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.151807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.151836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.161716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.161819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.161846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.161861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.161874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.161903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.171726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.171828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.171854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.171874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.171887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.171915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.181786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.181905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.181931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.181946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.181958] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.181986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.191748] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.191897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.191937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.191953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.191965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.192000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.201858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.201961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.201988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.202003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.202015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.202052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.211870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.211977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.212017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.212042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.212055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.212085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.221858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.221955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.221981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.221997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.222033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.222063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.231957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.232100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.232126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.232141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.232154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.232184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.241912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.242031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.242058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.242073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.242085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.242114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.252212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.252362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.252388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.252403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.583 [2024-04-26 15:10:30.252415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.583 [2024-04-26 15:10:30.252443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.583 qpair failed and we were unable to recover it.
00:29:44.583 [2024-04-26 15:10:30.262092] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.583 [2024-04-26 15:10:30.262194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.583 [2024-04-26 15:10:30.262225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.583 [2024-04-26 15:10:30.262241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.584 [2024-04-26 15:10:30.262253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.584 [2024-04-26 15:10:30.262283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.584 qpair failed and we were unable to recover it.
00:29:44.584 [2024-04-26 15:10:30.272073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.584 [2024-04-26 15:10:30.272174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.584 [2024-04-26 15:10:30.272200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.584 [2024-04-26 15:10:30.272215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.584 [2024-04-26 15:10:30.272228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.584 [2024-04-26 15:10:30.272258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.584 qpair failed and we were unable to recover it.
00:29:44.584 [2024-04-26 15:10:30.282085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.584 [2024-04-26 15:10:30.282190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.584 [2024-04-26 15:10:30.282217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.584 [2024-04-26 15:10:30.282232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.584 [2024-04-26 15:10:30.282245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.584 [2024-04-26 15:10:30.282274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.584 qpair failed and we were unable to recover it.
00:29:44.584 [2024-04-26 15:10:30.292090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.584 [2024-04-26 15:10:30.292196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.584 [2024-04-26 15:10:30.292222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.584 [2024-04-26 15:10:30.292237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.584 [2024-04-26 15:10:30.292250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.584 [2024-04-26 15:10:30.292278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.584 qpair failed and we were unable to recover it.
00:29:44.584 [2024-04-26 15:10:30.302093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.584 [2024-04-26 15:10:30.302194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.584 [2024-04-26 15:10:30.302219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.584 [2024-04-26 15:10:30.302233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.584 [2024-04-26 15:10:30.302246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.584 [2024-04-26 15:10:30.302276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.584 qpair failed and we were unable to recover it.
00:29:44.584 [2024-04-26 15:10:30.312139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.584 [2024-04-26 15:10:30.312242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.584 [2024-04-26 15:10:30.312268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.584 [2024-04-26 15:10:30.312283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.584 [2024-04-26 15:10:30.312311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.584 [2024-04-26 15:10:30.312341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.584 qpair failed and we were unable to recover it.
00:29:44.843 [2024-04-26 15:10:30.322175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.843 [2024-04-26 15:10:30.322408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.843 [2024-04-26 15:10:30.322434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.843 [2024-04-26 15:10:30.322449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.843 [2024-04-26 15:10:30.322462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0
00:29:44.843 [2024-04-26 15:10:30.322499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.843 qpair failed and we were unable to recover it.
00:29:44.843 [2024-04-26 15:10:30.332303] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.843 [2024-04-26 15:10:30.332435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.843 [2024-04-26 15:10:30.332458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.843 [2024-04-26 15:10:30.332472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.843 [2024-04-26 15:10:30.332484] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.843 [2024-04-26 15:10:30.332512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.843 qpair failed and we were unable to recover it. 00:29:44.843 [2024-04-26 15:10:30.342205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.843 [2024-04-26 15:10:30.342336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.843 [2024-04-26 15:10:30.342359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.342373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.342386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.342414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.352316] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.352434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.352475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.352490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.352502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.352543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 
00:29:44.844 [2024-04-26 15:10:30.362264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.362372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.362396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.362410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.362422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.362451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.372349] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.372491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.372517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.372532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.372545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.372585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.382321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.382487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.382512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.382527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.382539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.382574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 
00:29:44.844 [2024-04-26 15:10:30.392360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.392479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.392506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.392520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.392532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.392565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.402373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.402476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.402502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.402517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.402529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.402557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.412446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.412560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.412587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.412603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.412615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.412644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 
00:29:44.844 [2024-04-26 15:10:30.422467] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.422587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.422613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.422628] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.422640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.422668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.432501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.432650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.432675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.432690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.432703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.432735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.442512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.442662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.442693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.442708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.442735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.442764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 
00:29:44.844 [2024-04-26 15:10:30.452553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.452668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.452696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.452711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.452723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.452762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.462648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.462772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.462797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.462812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.462824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.462852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.472611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.472730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.472757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.472771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.472783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.472811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 
00:29:44.844 [2024-04-26 15:10:30.482635] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.844 [2024-04-26 15:10:30.482740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.844 [2024-04-26 15:10:30.482766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.844 [2024-04-26 15:10:30.482780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.844 [2024-04-26 15:10:30.482792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.844 [2024-04-26 15:10:30.482826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.844 qpair failed and we were unable to recover it. 00:29:44.844 [2024-04-26 15:10:30.492673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.845 [2024-04-26 15:10:30.492777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.845 [2024-04-26 15:10:30.492803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.845 [2024-04-26 15:10:30.492818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.845 [2024-04-26 15:10:30.492830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.845 [2024-04-26 15:10:30.492858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.845 qpair failed and we were unable to recover it. 00:29:44.845 [2024-04-26 15:10:30.502678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.845 [2024-04-26 15:10:30.502806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.845 [2024-04-26 15:10:30.502831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.845 [2024-04-26 15:10:30.502846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.845 [2024-04-26 15:10:30.502858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.845 [2024-04-26 15:10:30.502886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.845 qpair failed and we were unable to recover it. 
00:29:44.845 [2024-04-26 15:10:30.512713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.845 [2024-04-26 15:10:30.512816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.845 [2024-04-26 15:10:30.512842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.845 [2024-04-26 15:10:30.512857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.845 [2024-04-26 15:10:30.512869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.845 [2024-04-26 15:10:30.512897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.845 qpair failed and we were unable to recover it. 00:29:44.845 [2024-04-26 15:10:30.522736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.845 [2024-04-26 15:10:30.522840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.845 [2024-04-26 15:10:30.522866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.845 [2024-04-26 15:10:30.522881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.845 [2024-04-26 15:10:30.522893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.845 [2024-04-26 15:10:30.522921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.845 qpair failed and we were unable to recover it. 00:29:44.845 [2024-04-26 15:10:30.532779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.845 [2024-04-26 15:10:30.532924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.845 [2024-04-26 15:10:30.532955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.845 [2024-04-26 15:10:30.532970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.845 [2024-04-26 15:10:30.532982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.845 [2024-04-26 15:10:30.533035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.845 qpair failed and we were unable to recover it. 
00:29:44.845 [2024-04-26 15:10:30.542786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.845 [2024-04-26 15:10:30.542926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.845 [2024-04-26 15:10:30.542952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.845 [2024-04-26 15:10:30.542967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.845 [2024-04-26 15:10:30.542979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.845 [2024-04-26 15:10:30.543036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.845 qpair failed and we were unable to recover it. 00:29:44.845 [2024-04-26 15:10:30.552815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.845 [2024-04-26 15:10:30.552922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.845 [2024-04-26 15:10:30.552946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.845 [2024-04-26 15:10:30.552960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.845 [2024-04-26 15:10:30.552973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.845 [2024-04-26 15:10:30.553016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.845 qpair failed and we were unable to recover it. 00:29:44.845 [2024-04-26 15:10:30.562814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.845 [2024-04-26 15:10:30.562909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.845 [2024-04-26 15:10:30.562935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.845 [2024-04-26 15:10:30.562950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.845 [2024-04-26 15:10:30.562963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.845 [2024-04-26 15:10:30.562991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.845 qpair failed and we were unable to recover it. 
00:29:44.845 [2024-04-26 15:10:30.572872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.845 [2024-04-26 15:10:30.573059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.845 [2024-04-26 15:10:30.573086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.845 [2024-04-26 15:10:30.573101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.845 [2024-04-26 15:10:30.573118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:44.845 [2024-04-26 15:10:30.573147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.845 qpair failed and we were unable to recover it. 00:29:45.104 [2024-04-26 15:10:30.582973] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.104 [2024-04-26 15:10:30.583086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.104 [2024-04-26 15:10:30.583113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.104 [2024-04-26 15:10:30.583128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.104 [2024-04-26 15:10:30.583140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.104 [2024-04-26 15:10:30.583170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.104 qpair failed and we were unable to recover it. 00:29:45.104 [2024-04-26 15:10:30.592884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.104 [2024-04-26 15:10:30.593014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.104 [2024-04-26 15:10:30.593049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.104 [2024-04-26 15:10:30.593065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.104 [2024-04-26 15:10:30.593077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.104 [2024-04-26 15:10:30.593107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.104 qpair failed and we were unable to recover it. 
00:29:45.104 [2024-04-26 15:10:30.602947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.104 [2024-04-26 15:10:30.603088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.104 [2024-04-26 15:10:30.603115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.104 [2024-04-26 15:10:30.603130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.104 [2024-04-26 15:10:30.603143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.104 [2024-04-26 15:10:30.603182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.104 qpair failed and we were unable to recover it. 00:29:45.104 [2024-04-26 15:10:30.612988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.104 [2024-04-26 15:10:30.613119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.104 [2024-04-26 15:10:30.613146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.104 [2024-04-26 15:10:30.613162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.104 [2024-04-26 15:10:30.613174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.104 [2024-04-26 15:10:30.613215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.104 qpair failed and we were unable to recover it. 00:29:45.104 [2024-04-26 15:10:30.623038] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.104 [2024-04-26 15:10:30.623172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.104 [2024-04-26 15:10:30.623203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.104 [2024-04-26 15:10:30.623231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.104 [2024-04-26 15:10:30.623243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.104 [2024-04-26 15:10:30.623272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.104 qpair failed and we were unable to recover it. 
00:29:45.105 [2024-04-26 15:10:30.633059] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.633177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.633204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.633219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.633231] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.633272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 00:29:45.105 [2024-04-26 15:10:30.643081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.643192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.643219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.643234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.643246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.643287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 00:29:45.105 [2024-04-26 15:10:30.653099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.653214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.653241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.653256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.653269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.653298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 
00:29:45.105 [2024-04-26 15:10:30.663117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.663239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.663265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.663281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.663316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.663356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 00:29:45.105 [2024-04-26 15:10:30.673132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.673244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.673268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.673282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.673296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.673340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 00:29:45.105 [2024-04-26 15:10:30.683191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.683315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.683341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.683355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.683367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.683405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 
00:29:45.105 [2024-04-26 15:10:30.693227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.693374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.693400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.693415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.693427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.693455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 00:29:45.105 [2024-04-26 15:10:30.703255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.703368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.703394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.703409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.703421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.703450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 00:29:45.105 [2024-04-26 15:10:30.713269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.713436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.713463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.713478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.713500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.713528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 
00:29:45.105 [2024-04-26 15:10:30.723272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.723383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.723408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.723423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.723436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.723465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 00:29:45.105 [2024-04-26 15:10:30.733344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.733449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.733475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.733490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.733502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.733530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 00:29:45.105 [2024-04-26 15:10:30.743349] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.743452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.743477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.743492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.743504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.743532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 
00:29:45.105 [2024-04-26 15:10:30.753403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.753502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.753528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.105 [2024-04-26 15:10:30.753543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.105 [2024-04-26 15:10:30.753560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.105 [2024-04-26 15:10:30.753589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.105 qpair failed and we were unable to recover it. 00:29:45.105 [2024-04-26 15:10:30.763483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.105 [2024-04-26 15:10:30.763586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.105 [2024-04-26 15:10:30.763611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.106 [2024-04-26 15:10:30.763626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.106 [2024-04-26 15:10:30.763638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.106 [2024-04-26 15:10:30.763667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.106 qpair failed and we were unable to recover it. 00:29:45.106 [2024-04-26 15:10:30.773499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.106 [2024-04-26 15:10:30.773645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.106 [2024-04-26 15:10:30.773671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.106 [2024-04-26 15:10:30.773685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.106 [2024-04-26 15:10:30.773698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.106 [2024-04-26 15:10:30.773736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.106 qpair failed and we were unable to recover it. 
00:29:45.106 [2024-04-26 15:10:30.783533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.106 [2024-04-26 15:10:30.783648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.106 [2024-04-26 15:10:30.783674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.106 [2024-04-26 15:10:30.783688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.106 [2024-04-26 15:10:30.783700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.106 [2024-04-26 15:10:30.783728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.106 qpair failed and we were unable to recover it. 00:29:45.106 [2024-04-26 15:10:30.793534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.106 [2024-04-26 15:10:30.793635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.106 [2024-04-26 15:10:30.793662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.106 [2024-04-26 15:10:30.793677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.106 [2024-04-26 15:10:30.793689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.106 [2024-04-26 15:10:30.793717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.106 qpair failed and we were unable to recover it. 00:29:45.106 [2024-04-26 15:10:30.803613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.106 [2024-04-26 15:10:30.803724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.106 [2024-04-26 15:10:30.803749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.106 [2024-04-26 15:10:30.803762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.106 [2024-04-26 15:10:30.803775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.106 [2024-04-26 15:10:30.803804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.106 qpair failed and we were unable to recover it. 
00:29:45.106 [2024-04-26 15:10:30.813566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.106 [2024-04-26 15:10:30.813670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.106 [2024-04-26 15:10:30.813695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.106 [2024-04-26 15:10:30.813710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.106 [2024-04-26 15:10:30.813722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.106 [2024-04-26 15:10:30.813750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.106 qpair failed and we were unable to recover it. 00:29:45.106 [2024-04-26 15:10:30.823638] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.106 [2024-04-26 15:10:30.823744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.106 [2024-04-26 15:10:30.823770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.106 [2024-04-26 15:10:30.823784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.106 [2024-04-26 15:10:30.823797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.106 [2024-04-26 15:10:30.823825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.106 qpair failed and we were unable to recover it. 00:29:45.106 [2024-04-26 15:10:30.833637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.106 [2024-04-26 15:10:30.833735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.106 [2024-04-26 15:10:30.833761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.106 [2024-04-26 15:10:30.833776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.106 [2024-04-26 15:10:30.833788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.106 [2024-04-26 15:10:30.833817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.106 qpair failed and we were unable to recover it. 
00:29:45.365 [2024-04-26 15:10:30.843657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.365 [2024-04-26 15:10:30.843762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.365 [2024-04-26 15:10:30.843788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.365 [2024-04-26 15:10:30.843803] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.365 [2024-04-26 15:10:30.843820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.365 [2024-04-26 15:10:30.843850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.365 qpair failed and we were unable to recover it. 00:29:45.365 [2024-04-26 15:10:30.853683] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.365 [2024-04-26 15:10:30.853791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.365 [2024-04-26 15:10:30.853817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.365 [2024-04-26 15:10:30.853832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.365 [2024-04-26 15:10:30.853844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.365 [2024-04-26 15:10:30.853872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.365 qpair failed and we were unable to recover it. 00:29:45.365 [2024-04-26 15:10:30.863711] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.365 [2024-04-26 15:10:30.863825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.365 [2024-04-26 15:10:30.863852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.365 [2024-04-26 15:10:30.863866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.365 [2024-04-26 15:10:30.863878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.365 [2024-04-26 15:10:30.863906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.365 qpair failed and we were unable to recover it. 
00:29:45.893 [2024-04-26 15:10:31.535863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.893 [2024-04-26 15:10:31.535979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.893 [2024-04-26 15:10:31.536026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.893 [2024-04-26 15:10:31.536050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.893 [2024-04-26 15:10:31.536065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.893 [2024-04-26 15:10:31.536095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.893 qpair failed and we were unable to recover it. 00:29:45.893 [2024-04-26 15:10:31.545677] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.893 [2024-04-26 15:10:31.545841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.893 [2024-04-26 15:10:31.545878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.893 [2024-04-26 15:10:31.545893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.893 [2024-04-26 15:10:31.545906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.893 [2024-04-26 15:10:31.545935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.893 qpair failed and we were unable to recover it. 00:29:45.893 [2024-04-26 15:10:31.555722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.893 [2024-04-26 15:10:31.555850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.893 [2024-04-26 15:10:31.555874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.893 [2024-04-26 15:10:31.555888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.893 [2024-04-26 15:10:31.555900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.893 [2024-04-26 15:10:31.555927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.893 qpair failed and we were unable to recover it. 
00:29:45.893 [2024-04-26 15:10:31.565748] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.893 [2024-04-26 15:10:31.565865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.893 [2024-04-26 15:10:31.565895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.893 [2024-04-26 15:10:31.565911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.893 [2024-04-26 15:10:31.565925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.893 [2024-04-26 15:10:31.565954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.893 qpair failed and we were unable to recover it. 00:29:45.893 [2024-04-26 15:10:31.575784] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.893 [2024-04-26 15:10:31.575901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.894 [2024-04-26 15:10:31.575927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.894 [2024-04-26 15:10:31.575941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.894 [2024-04-26 15:10:31.575954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.894 [2024-04-26 15:10:31.575993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.894 qpair failed and we were unable to recover it. 00:29:45.894 [2024-04-26 15:10:31.585780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.894 [2024-04-26 15:10:31.585889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.894 [2024-04-26 15:10:31.585915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.894 [2024-04-26 15:10:31.585929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.894 [2024-04-26 15:10:31.585942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.894 [2024-04-26 15:10:31.585970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.894 qpair failed and we were unable to recover it. 
00:29:45.894 [2024-04-26 15:10:31.595912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.894 [2024-04-26 15:10:31.596041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.894 [2024-04-26 15:10:31.596067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.894 [2024-04-26 15:10:31.596082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.894 [2024-04-26 15:10:31.596095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.894 [2024-04-26 15:10:31.596134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.894 qpair failed and we were unable to recover it. 00:29:45.894 [2024-04-26 15:10:31.605896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.894 [2024-04-26 15:10:31.606058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.894 [2024-04-26 15:10:31.606084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.894 [2024-04-26 15:10:31.606098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.894 [2024-04-26 15:10:31.606116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.894 [2024-04-26 15:10:31.606147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.894 qpair failed and we were unable to recover it. 00:29:45.894 [2024-04-26 15:10:31.615920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.894 [2024-04-26 15:10:31.616049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.894 [2024-04-26 15:10:31.616076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.894 [2024-04-26 15:10:31.616091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.894 [2024-04-26 15:10:31.616104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.894 [2024-04-26 15:10:31.616133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.894 qpair failed and we were unable to recover it. 
00:29:45.894 [2024-04-26 15:10:31.625929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.894 [2024-04-26 15:10:31.626065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.894 [2024-04-26 15:10:31.626091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.894 [2024-04-26 15:10:31.626117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.894 [2024-04-26 15:10:31.626130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:45.894 [2024-04-26 15:10:31.626160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.894 qpair failed and we were unable to recover it. 00:29:46.154 [2024-04-26 15:10:31.635975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.154 [2024-04-26 15:10:31.636103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.154 [2024-04-26 15:10:31.636128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.154 [2024-04-26 15:10:31.636143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.154 [2024-04-26 15:10:31.636156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.154 [2024-04-26 15:10:31.636185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.154 qpair failed and we were unable to recover it. 00:29:46.154 [2024-04-26 15:10:31.645967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.154 [2024-04-26 15:10:31.646088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.154 [2024-04-26 15:10:31.646113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.154 [2024-04-26 15:10:31.646128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.154 [2024-04-26 15:10:31.646142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.154 [2024-04-26 15:10:31.646171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.154 qpair failed and we were unable to recover it. 
00:29:46.154 [2024-04-26 15:10:31.656031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.154 [2024-04-26 15:10:31.656160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.154 [2024-04-26 15:10:31.656185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.154 [2024-04-26 15:10:31.656200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.154 [2024-04-26 15:10:31.656213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.154 [2024-04-26 15:10:31.656242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.154 qpair failed and we were unable to recover it. 00:29:46.154 [2024-04-26 15:10:31.666079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.154 [2024-04-26 15:10:31.666186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.154 [2024-04-26 15:10:31.666211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.154 [2024-04-26 15:10:31.666226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.154 [2024-04-26 15:10:31.666239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.154 [2024-04-26 15:10:31.666269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.154 qpair failed and we were unable to recover it. 00:29:46.154 [2024-04-26 15:10:31.676140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.154 [2024-04-26 15:10:31.676243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.676268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.676282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.676295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.676339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 
00:29:46.155 [2024-04-26 15:10:31.686093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.686201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.686226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.686241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.686254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.686284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 00:29:46.155 [2024-04-26 15:10:31.696125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.696235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.696260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.696275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.696294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.696324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 00:29:46.155 [2024-04-26 15:10:31.706169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.706275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.706314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.706330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.706342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.706370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 
00:29:46.155 [2024-04-26 15:10:31.716211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.716356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.716381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.716411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.716423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.716452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 00:29:46.155 [2024-04-26 15:10:31.726264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.726398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.726422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.726437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.726450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.726478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 00:29:46.155 [2024-04-26 15:10:31.736294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.736470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.736494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.736508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.736521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.736549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 
00:29:46.155 [2024-04-26 15:10:31.746336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.746457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.746482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.746496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.746510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.746538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 00:29:46.155 [2024-04-26 15:10:31.756425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.756526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.756551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.756565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.756578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.756607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 00:29:46.155 [2024-04-26 15:10:31.766327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.766442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.766467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.766482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.766494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.766522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 
00:29:46.155 [2024-04-26 15:10:31.776372] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.776524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.776548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.776562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.776575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.776602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 00:29:46.155 [2024-04-26 15:10:31.786396] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.786499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.786523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.786537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.786554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.786584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 00:29:46.155 [2024-04-26 15:10:31.796462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.155 [2024-04-26 15:10:31.796605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.155 [2024-04-26 15:10:31.796630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.155 [2024-04-26 15:10:31.796644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.155 [2024-04-26 15:10:31.796657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.155 [2024-04-26 15:10:31.796685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.155 qpair failed and we were unable to recover it. 
00:29:46.156 [2024-04-26 15:10:31.806463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.156 [2024-04-26 15:10:31.806561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.156 [2024-04-26 15:10:31.806586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.156 [2024-04-26 15:10:31.806601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.156 [2024-04-26 15:10:31.806614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.156 [2024-04-26 15:10:31.806643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.156 qpair failed and we were unable to recover it. 00:29:46.156 [2024-04-26 15:10:31.816479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.156 [2024-04-26 15:10:31.816585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.156 [2024-04-26 15:10:31.816623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.156 [2024-04-26 15:10:31.816637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.156 [2024-04-26 15:10:31.816650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.156 [2024-04-26 15:10:31.816679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.156 qpair failed and we were unable to recover it. 00:29:46.156 [2024-04-26 15:10:31.826484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.156 [2024-04-26 15:10:31.826586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.156 [2024-04-26 15:10:31.826611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.156 [2024-04-26 15:10:31.826625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.156 [2024-04-26 15:10:31.826638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.156 [2024-04-26 15:10:31.826666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.156 qpair failed and we were unable to recover it. 
00:29:46.156 [2024-04-26 15:10:31.836532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.156 [2024-04-26 15:10:31.836653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.156 [2024-04-26 15:10:31.836677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.156 [2024-04-26 15:10:31.836692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.156 [2024-04-26 15:10:31.836705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.156 [2024-04-26 15:10:31.836733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.156 qpair failed and we were unable to recover it. 00:29:46.156 [2024-04-26 15:10:31.846537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.156 [2024-04-26 15:10:31.846647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.156 [2024-04-26 15:10:31.846673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.156 [2024-04-26 15:10:31.846688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.156 [2024-04-26 15:10:31.846700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.156 [2024-04-26 15:10:31.846729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.156 qpair failed and we were unable to recover it. 00:29:46.156 [2024-04-26 15:10:31.856630] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.156 [2024-04-26 15:10:31.856774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.156 [2024-04-26 15:10:31.856800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.156 [2024-04-26 15:10:31.856815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.156 [2024-04-26 15:10:31.856827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.156 [2024-04-26 15:10:31.856856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.156 qpair failed and we were unable to recover it. 
00:29:46.156 [2024-04-26 15:10:31.866620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.156 [2024-04-26 15:10:31.866728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.156 [2024-04-26 15:10:31.866754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.156 [2024-04-26 15:10:31.866769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.156 [2024-04-26 15:10:31.866781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.156 [2024-04-26 15:10:31.866810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.156 qpair failed and we were unable to recover it. 00:29:46.156 [2024-04-26 15:10:31.876617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.156 [2024-04-26 15:10:31.876712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.156 [2024-04-26 15:10:31.876736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.156 [2024-04-26 15:10:31.876756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.156 [2024-04-26 15:10:31.876769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.156 [2024-04-26 15:10:31.876797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.156 qpair failed and we were unable to recover it. 00:29:46.156 [2024-04-26 15:10:31.886682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.156 [2024-04-26 15:10:31.886791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.156 [2024-04-26 15:10:31.886815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.156 [2024-04-26 15:10:31.886829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.156 [2024-04-26 15:10:31.886841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.156 [2024-04-26 15:10:31.886869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.156 qpair failed and we were unable to recover it. 
00:29:46.415 [2024-04-26 15:10:31.896693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.415 [2024-04-26 15:10:31.896806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.415 [2024-04-26 15:10:31.896832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.415 [2024-04-26 15:10:31.896847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.415 [2024-04-26 15:10:31.896859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.415 [2024-04-26 15:10:31.896887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.415 qpair failed and we were unable to recover it. 00:29:46.415 [2024-04-26 15:10:31.906742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.415 [2024-04-26 15:10:31.906915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.415 [2024-04-26 15:10:31.906942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.415 [2024-04-26 15:10:31.906958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.415 [2024-04-26 15:10:31.906971] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.415 [2024-04-26 15:10:31.907014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.415 qpair failed and we were unable to recover it. 00:29:46.415 [2024-04-26 15:10:31.916721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.415 [2024-04-26 15:10:31.916826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.415 [2024-04-26 15:10:31.916853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.415 [2024-04-26 15:10:31.916867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.415 [2024-04-26 15:10:31.916879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.415 [2024-04-26 15:10:31.916907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.415 qpair failed and we were unable to recover it. 
00:29:46.415 [2024-04-26 15:10:31.926741] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.415 [2024-04-26 15:10:31.926854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.415 [2024-04-26 15:10:31.926880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.415 [2024-04-26 15:10:31.926895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.415 [2024-04-26 15:10:31.926908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.415 [2024-04-26 15:10:31.926937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.415 qpair failed and we were unable to recover it. 00:29:46.415 [2024-04-26 15:10:31.936791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.415 [2024-04-26 15:10:31.936895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.415 [2024-04-26 15:10:31.936921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.415 [2024-04-26 15:10:31.936936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.415 [2024-04-26 15:10:31.936949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.415 [2024-04-26 15:10:31.936977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.415 qpair failed and we were unable to recover it. 00:29:46.416 [2024-04-26 15:10:31.946937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:31.947066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:31.947094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:31.947110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:31.947123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:31.947151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 
00:29:46.416 [2024-04-26 15:10:31.956813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:31.956916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:31.956942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:31.956957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:31.956970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:31.956997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 00:29:46.416 [2024-04-26 15:10:31.966892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:31.967011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:31.967049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:31.967071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:31.967085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:31.967114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 00:29:46.416 [2024-04-26 15:10:31.976880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:31.977015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:31.977048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:31.977063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:31.977075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:31.977105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 
00:29:46.416 [2024-04-26 15:10:31.986951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:31.987095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:31.987122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:31.987137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:31.987150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:31.987179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 00:29:46.416 [2024-04-26 15:10:31.996925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:31.997066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:31.997092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:31.997106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:31.997119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:31.997148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 00:29:46.416 [2024-04-26 15:10:32.006969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:32.007108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:32.007135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:32.007151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:32.007163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:32.007193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 
00:29:46.416 [2024-04-26 15:10:32.017090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:32.017192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:32.017219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:32.017235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:32.017247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:32.017275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 00:29:46.416 [2024-04-26 15:10:32.027051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:32.027155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:32.027181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:32.027196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:32.027208] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:32.027237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 00:29:46.416 [2024-04-26 15:10:32.037073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:32.037175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:32.037202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:32.037217] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:32.037230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:32.037259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 
00:29:46.416 [2024-04-26 15:10:32.047102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:32.047209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:32.047236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:32.047251] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:32.047263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:32.047307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 00:29:46.416 [2024-04-26 15:10:32.057134] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:32.057241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:32.057266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:32.057287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:32.057301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:32.057330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 00:29:46.416 [2024-04-26 15:10:32.067148] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:32.067252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.416 [2024-04-26 15:10:32.067279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.416 [2024-04-26 15:10:32.067295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.416 [2024-04-26 15:10:32.067307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.416 [2024-04-26 15:10:32.067350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.416 qpair failed and we were unable to recover it. 
00:29:46.416 [2024-04-26 15:10:32.077171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.416 [2024-04-26 15:10:32.077278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.417 [2024-04-26 15:10:32.077305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.417 [2024-04-26 15:10:32.077335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.417 [2024-04-26 15:10:32.077348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.417 [2024-04-26 15:10:32.077376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.417 qpair failed and we were unable to recover it. 00:29:46.417 [2024-04-26 15:10:32.087199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.417 [2024-04-26 15:10:32.087325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.417 [2024-04-26 15:10:32.087351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.417 [2024-04-26 15:10:32.087366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.417 [2024-04-26 15:10:32.087378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.417 [2024-04-26 15:10:32.087406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.417 qpair failed and we were unable to recover it. 00:29:46.417 [2024-04-26 15:10:32.097278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.417 [2024-04-26 15:10:32.097400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.417 [2024-04-26 15:10:32.097426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.417 [2024-04-26 15:10:32.097441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.417 [2024-04-26 15:10:32.097454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.417 [2024-04-26 15:10:32.097482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.417 qpair failed and we were unable to recover it. 
00:29:46.417 [2024-04-26 15:10:32.107278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.417 [2024-04-26 15:10:32.107396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.417 [2024-04-26 15:10:32.107421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.417 [2024-04-26 15:10:32.107436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.417 [2024-04-26 15:10:32.107447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.417 [2024-04-26 15:10:32.107475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.417 qpair failed and we were unable to recover it. 00:29:46.417 [2024-04-26 15:10:32.117279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.417 [2024-04-26 15:10:32.117403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.417 [2024-04-26 15:10:32.117429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.417 [2024-04-26 15:10:32.117444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.417 [2024-04-26 15:10:32.117456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.417 [2024-04-26 15:10:32.117484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.417 qpair failed and we were unable to recover it. 00:29:46.417 [2024-04-26 15:10:32.127340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.417 [2024-04-26 15:10:32.127435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.417 [2024-04-26 15:10:32.127459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.417 [2024-04-26 15:10:32.127473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.417 [2024-04-26 15:10:32.127485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.417 [2024-04-26 15:10:32.127512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.417 qpair failed and we were unable to recover it. 
00:29:46.417 [2024-04-26 15:10:32.137371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.417 [2024-04-26 15:10:32.137473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.417 [2024-04-26 15:10:32.137499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.417 [2024-04-26 15:10:32.137513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.417 [2024-04-26 15:10:32.137525] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.417 [2024-04-26 15:10:32.137553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.417 qpair failed and we were unable to recover it. 00:29:46.417 [2024-04-26 15:10:32.147355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.417 [2024-04-26 15:10:32.147459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.417 [2024-04-26 15:10:32.147490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.417 [2024-04-26 15:10:32.147506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.417 [2024-04-26 15:10:32.147518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.417 [2024-04-26 15:10:32.147547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.417 qpair failed and we were unable to recover it. 00:29:46.676 [2024-04-26 15:10:32.157390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.157498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.157523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.157537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.157549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.157578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 
00:29:46.676 [2024-04-26 15:10:32.167432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.167527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.167553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.167567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.167580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.167608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 00:29:46.676 [2024-04-26 15:10:32.177469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.177579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.177604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.177619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.177631] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.177659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 00:29:46.676 [2024-04-26 15:10:32.187485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.187585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.187609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.187623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.187635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.187663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 
00:29:46.676 [2024-04-26 15:10:32.197506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.197609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.197635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.197650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.197662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.197690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 00:29:46.676 [2024-04-26 15:10:32.207543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.207641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.207667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.207682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.207694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.207722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 00:29:46.676 [2024-04-26 15:10:32.217579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.217712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.217738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.217752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.217764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.217792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 
00:29:46.676 [2024-04-26 15:10:32.227618] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.227717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.227741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.227755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.227767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.227795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 00:29:46.676 [2024-04-26 15:10:32.237626] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.237724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.237753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.237769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.237781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.237809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 00:29:46.676 [2024-04-26 15:10:32.247667] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.247760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.676 [2024-04-26 15:10:32.247784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.676 [2024-04-26 15:10:32.247797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.676 [2024-04-26 15:10:32.247810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.676 [2024-04-26 15:10:32.247838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.676 qpair failed and we were unable to recover it. 
00:29:46.676 [2024-04-26 15:10:32.257678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.676 [2024-04-26 15:10:32.257782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.257807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.257822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.257834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.257862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.267721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.267834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.267860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.267875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.267887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.267915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.277752] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.277856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.277881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.277895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.277907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.277941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 
00:29:46.677 [2024-04-26 15:10:32.287756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.287848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.287872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.287886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.287899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.287926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.297854] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.298016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.298052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.298068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.298080] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.298110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.307880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.308053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.308081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.308096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.308109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.308138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 
00:29:46.677 [2024-04-26 15:10:32.317846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.317940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.317963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.317978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.317990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.318042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.327951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.328062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.328094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.328111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.328123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.328153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.337925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.338068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.338095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.338110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.338123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.338152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 
00:29:46.677 [2024-04-26 15:10:32.347950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.348070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.348097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.348112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.348125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.348154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.357972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.358108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.358135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.358150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.358163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.358192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.367992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.368114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.368141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.368156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.368168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.368202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 
00:29:46.677 [2024-04-26 15:10:32.378113] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.378216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.378243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.378257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.378270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.378299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.388071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.388175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.677 [2024-04-26 15:10:32.388201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.677 [2024-04-26 15:10:32.388216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.677 [2024-04-26 15:10:32.388228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.677 [2024-04-26 15:10:32.388257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.677 qpair failed and we were unable to recover it. 00:29:46.677 [2024-04-26 15:10:32.398082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.677 [2024-04-26 15:10:32.398201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.678 [2024-04-26 15:10:32.398228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.678 [2024-04-26 15:10:32.398243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.678 [2024-04-26 15:10:32.398256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.678 [2024-04-26 15:10:32.398285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.678 qpair failed and we were unable to recover it. 
00:29:46.678 [2024-04-26 15:10:32.408109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.678 [2024-04-26 15:10:32.408211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.678 [2024-04-26 15:10:32.408237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.678 [2024-04-26 15:10:32.408252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.678 [2024-04-26 15:10:32.408265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.678 [2024-04-26 15:10:32.408294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.678 qpair failed and we were unable to recover it. 00:29:46.936 [2024-04-26 15:10:32.418152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.936 [2024-04-26 15:10:32.418275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.936 [2024-04-26 15:10:32.418320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.936 [2024-04-26 15:10:32.418336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.936 [2024-04-26 15:10:32.418348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.936 [2024-04-26 15:10:32.418376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.936 qpair failed and we were unable to recover it. 00:29:46.936 [2024-04-26 15:10:32.428235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.936 [2024-04-26 15:10:32.428388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.936 [2024-04-26 15:10:32.428414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.936 [2024-04-26 15:10:32.428430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.936 [2024-04-26 15:10:32.428442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.428485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 
00:29:46.937 [2024-04-26 15:10:32.438196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.438318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.438343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.438358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.438370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.438398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 00:29:46.937 [2024-04-26 15:10:32.448231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.448337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.448361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.448376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.448388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.448416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 00:29:46.937 [2024-04-26 15:10:32.458271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.458395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.458421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.458436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.458448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.458481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 
00:29:46.937 [2024-04-26 15:10:32.468286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.468409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.468434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.468449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.468461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.468490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 00:29:46.937 [2024-04-26 15:10:32.478318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.478429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.478455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.478469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.478481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.478509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 00:29:46.937 [2024-04-26 15:10:32.488434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.488532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.488557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.488571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.488583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.488611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 
00:29:46.937 [2024-04-26 15:10:32.498354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.498477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.498502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.498517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.498529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.498557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 00:29:46.937 [2024-04-26 15:10:32.508404] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.508507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.508537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.508552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.508565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.508593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 00:29:46.937 [2024-04-26 15:10:32.518457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.518555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.518580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.518595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.518607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.518635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 
00:29:46.937 [2024-04-26 15:10:32.528460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.528561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.528587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.528601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.528614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.528642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 00:29:46.937 [2024-04-26 15:10:32.538497] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.538637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.538662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.538676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.538689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.538716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 00:29:46.937 [2024-04-26 15:10:32.548513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.548640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.548665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.548680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.548697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.548725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 
00:29:46.937 [2024-04-26 15:10:32.558546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.558661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.937 [2024-04-26 15:10:32.558686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.937 [2024-04-26 15:10:32.558700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.937 [2024-04-26 15:10:32.558713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.937 [2024-04-26 15:10:32.558740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.937 qpair failed and we were unable to recover it. 00:29:46.937 [2024-04-26 15:10:32.568586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.937 [2024-04-26 15:10:32.568681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.568711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.568726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.568738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.568767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 00:29:46.938 [2024-04-26 15:10:32.578719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.578851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.578877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.578892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.578904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.578931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 
00:29:46.938 [2024-04-26 15:10:32.588650] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.588775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.588801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.588816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.588828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.588856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 00:29:46.938 [2024-04-26 15:10:32.598650] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.598751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.598775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.598789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.598802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.598829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 00:29:46.938 [2024-04-26 15:10:32.608758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.608857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.608883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.608898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.608911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.608939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 
00:29:46.938 [2024-04-26 15:10:32.618720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.618823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.618847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.618861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.618873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.618901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 00:29:46.938 [2024-04-26 15:10:32.628733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.628836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.628860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.628874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.628886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.628914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 00:29:46.938 [2024-04-26 15:10:32.638787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.638885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.638911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.638925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.638942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.638970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 
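The closing "CQ transport error -6 (No such device or address)" on each block is simply a negated errno: -6 is -ENXIO, raised once the TCP qpair has been torn down under the completion poller, and the parenthesized text is the corresponding strerror output. A one-liner to confirm the mapping (assumes python3 is on the PATH):

# -6 is -ENXIO; print the errno name and string the log already shows.
python3 -c 'import errno, os; print(errno.errorcode[6], "-", os.strerror(6))'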
00:29:46.938 [2024-04-26 15:10:32.648785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.648883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.648909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.648923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.648935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.648963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 00:29:46.938 [2024-04-26 15:10:32.658851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.658964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.658989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.659004] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.659016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.659084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 00:29:46.938 [2024-04-26 15:10:32.668849] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.938 [2024-04-26 15:10:32.668976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.938 [2024-04-26 15:10:32.669017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.938 [2024-04-26 15:10:32.669048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.938 [2024-04-26 15:10:32.669062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:46.938 [2024-04-26 15:10:32.669092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.938 qpair failed and we were unable to recover it. 
00:29:47.197 [2024-04-26 15:10:32.678892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.197 [2024-04-26 15:10:32.679016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.197 [2024-04-26 15:10:32.679052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.197 [2024-04-26 15:10:32.679067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.197 [2024-04-26 15:10:32.679079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.197 [2024-04-26 15:10:32.679109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.197 qpair failed and we were unable to recover it. 00:29:47.197 [2024-04-26 15:10:32.688945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.197 [2024-04-26 15:10:32.689069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.197 [2024-04-26 15:10:32.689097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.197 [2024-04-26 15:10:32.689112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.197 [2024-04-26 15:10:32.689124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.197 [2024-04-26 15:10:32.689154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.197 qpair failed and we were unable to recover it. 00:29:47.197 [2024-04-26 15:10:32.698943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.197 [2024-04-26 15:10:32.699067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.197 [2024-04-26 15:10:32.699093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.197 [2024-04-26 15:10:32.699109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.197 [2024-04-26 15:10:32.699122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.197 [2024-04-26 15:10:32.699151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.197 qpair failed and we were unable to recover it. 
00:29:47.197 [2024-04-26 15:10:32.708990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.197 [2024-04-26 15:10:32.709112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.197 [2024-04-26 15:10:32.709139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.197 [2024-04-26 15:10:32.709154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.197 [2024-04-26 15:10:32.709167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.197 [2024-04-26 15:10:32.709197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.197 qpair failed and we were unable to recover it. 00:29:47.197 [2024-04-26 15:10:32.718978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.197 [2024-04-26 15:10:32.719097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.197 [2024-04-26 15:10:32.719124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.197 [2024-04-26 15:10:32.719139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.197 [2024-04-26 15:10:32.719152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.197 [2024-04-26 15:10:32.719181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.197 qpair failed and we were unable to recover it. 00:29:47.197 [2024-04-26 15:10:32.729103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.197 [2024-04-26 15:10:32.729244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.197 [2024-04-26 15:10:32.729270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.197 [2024-04-26 15:10:32.729286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.197 [2024-04-26 15:10:32.729304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.197 [2024-04-26 15:10:32.729334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.197 qpair failed and we were unable to recover it. 
00:29:47.197 [2024-04-26 15:10:32.739068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.197 [2024-04-26 15:10:32.739185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.739212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.739227] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.739240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.739269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 00:29:47.198 [2024-04-26 15:10:32.749099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.749209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.749236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.749251] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.749264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.749294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 00:29:47.198 [2024-04-26 15:10:32.759111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.759257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.759284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.759299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.759312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.759341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 
00:29:47.198 [2024-04-26 15:10:32.769153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.769254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.769280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.769295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.769323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.769351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 00:29:47.198 [2024-04-26 15:10:32.779219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.779362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.779388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.779402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.779415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.779443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 00:29:47.198 [2024-04-26 15:10:32.789220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.789324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.789366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.789380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.789392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.789421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 
00:29:47.198 [2024-04-26 15:10:32.799365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.799480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.799507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.799522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.799534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.799562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 00:29:47.198 [2024-04-26 15:10:32.809268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.809400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.809427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.809442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.809454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.809483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 00:29:47.198 [2024-04-26 15:10:32.819380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.819519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.819545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.819564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.819577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.819605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 
00:29:47.198 [2024-04-26 15:10:32.829411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.829513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.829538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.829553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.829565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.829593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 00:29:47.198 [2024-04-26 15:10:32.839368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.839470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.839496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.839511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.839523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.839551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 00:29:47.198 [2024-04-26 15:10:32.849407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.849504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.849530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.849544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.849557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.849585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 
00:29:47.198 [2024-04-26 15:10:32.859449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.859549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.859574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.859589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.859601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.198 [2024-04-26 15:10:32.859630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.198 qpair failed and we were unable to recover it. 00:29:47.198 [2024-04-26 15:10:32.869453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.198 [2024-04-26 15:10:32.869561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.198 [2024-04-26 15:10:32.869587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.198 [2024-04-26 15:10:32.869602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.198 [2024-04-26 15:10:32.869615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.199 [2024-04-26 15:10:32.869643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.199 qpair failed and we were unable to recover it. 00:29:47.199 [2024-04-26 15:10:32.879483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.199 [2024-04-26 15:10:32.879605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.199 [2024-04-26 15:10:32.879631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.199 [2024-04-26 15:10:32.879645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.199 [2024-04-26 15:10:32.879658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.199 [2024-04-26 15:10:32.879685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.199 qpair failed and we were unable to recover it. 
00:29:47.199 [2024-04-26 15:10:32.889502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.199 [2024-04-26 15:10:32.889605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.199 [2024-04-26 15:10:32.889629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.199 [2024-04-26 15:10:32.889643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.199 [2024-04-26 15:10:32.889655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.199 [2024-04-26 15:10:32.889682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.199 qpair failed and we were unable to recover it. 00:29:47.199 [2024-04-26 15:10:32.899636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.199 [2024-04-26 15:10:32.899737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.199 [2024-04-26 15:10:32.899763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.199 [2024-04-26 15:10:32.899777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.199 [2024-04-26 15:10:32.899789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.199 [2024-04-26 15:10:32.899818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.199 qpair failed and we were unable to recover it. 00:29:47.199 [2024-04-26 15:10:32.909559] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.199 [2024-04-26 15:10:32.909663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.199 [2024-04-26 15:10:32.909689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.199 [2024-04-26 15:10:32.909709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.199 [2024-04-26 15:10:32.909722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.199 [2024-04-26 15:10:32.909750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.199 qpair failed and we were unable to recover it. 
00:29:47.199 [2024-04-26 15:10:32.919637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.199 [2024-04-26 15:10:32.919738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.199 [2024-04-26 15:10:32.919764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.199 [2024-04-26 15:10:32.919778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.199 [2024-04-26 15:10:32.919790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.199 [2024-04-26 15:10:32.919817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.199 qpair failed and we were unable to recover it. 00:29:47.199 [2024-04-26 15:10:32.929598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.199 [2024-04-26 15:10:32.929733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.199 [2024-04-26 15:10:32.929758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.199 [2024-04-26 15:10:32.929773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.199 [2024-04-26 15:10:32.929786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.199 [2024-04-26 15:10:32.929813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.199 qpair failed and we were unable to recover it. 00:29:47.458 [2024-04-26 15:10:32.939689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:32.939831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:32.939857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:32.939871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:32.939884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:32.939911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 
00:29:47.458 [2024-04-26 15:10:32.949677] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:32.949803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:32.949829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:32.949844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:32.949856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:32.949883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 00:29:47.458 [2024-04-26 15:10:32.959695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:32.959825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:32.959852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:32.959867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:32.959879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:32.959907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 00:29:47.458 [2024-04-26 15:10:32.969754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:32.969879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:32.969905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:32.969919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:32.969932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:32.969960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 
00:29:47.458 [2024-04-26 15:10:32.979791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:32.979937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:32.979963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:32.979977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:32.979989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:32.980039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 00:29:47.458 [2024-04-26 15:10:32.989787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:32.989893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:32.989919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:32.989935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:32.989947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:32.989975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 00:29:47.458 [2024-04-26 15:10:32.999798] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:32.999934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:32.999959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:32.999980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:32.999993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:33.000043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 
00:29:47.458 [2024-04-26 15:10:33.009906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:33.010040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:33.010067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:33.010082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:33.010095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:33.010124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 00:29:47.458 [2024-04-26 15:10:33.019903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:33.020033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:33.020063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:33.020078] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:33.020091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:33.020121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 00:29:47.458 [2024-04-26 15:10:33.029875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:33.029984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:33.030034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:33.030060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:33.030073] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:33.030102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 
00:29:47.458 [2024-04-26 15:10:33.039952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:33.040070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:33.040097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:33.040113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:33.040125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:33.040155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 00:29:47.458 [2024-04-26 15:10:33.050078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:33.050214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:33.050239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.458 [2024-04-26 15:10:33.050255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.458 [2024-04-26 15:10:33.050267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.458 [2024-04-26 15:10:33.050296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.458 qpair failed and we were unable to recover it. 00:29:47.458 [2024-04-26 15:10:33.060043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.458 [2024-04-26 15:10:33.060153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.458 [2024-04-26 15:10:33.060178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.459 [2024-04-26 15:10:33.060193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.459 [2024-04-26 15:10:33.060206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a45b0 00:29:47.459 [2024-04-26 15:10:33.060235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.459 qpair failed and we were unable to recover it. 00:29:47.459 [2024-04-26 15:10:33.060360] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:47.459 A controller has encountered a failure and is being reset. 00:29:47.459 Controller properly reset. 
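After this long run of rejected CONNECT attempts, the host's keep-alive submission finally fails, the controller is reset, and the lines that follow show a clean reattach with the four I/O qpairs spread across lcores 0-3. That output pattern matches SPDK's multi-core fabrics example tools (perf and the derived reconnect example both print "Associating ... with lcore N" and "Starting thread on core N" messages); as a sketch only, and not the exact command the target_disconnect harness runs, a comparable standalone initiator invocation would be:

# Hypothetical standalone run of SPDK's example perf initiator against the same target:
# it attaches over TCP, associates the namespace with each lcore in the mask,
# and starts one worker thread per core, as in the log lines below.
./build/examples/perf -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'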
00:29:47.716 Initializing NVMe Controllers 00:29:47.716 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:47.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:47.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:47.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:47.716 Initialization complete. Launching workers. 00:29:47.716 Starting thread on core 1 00:29:47.716 Starting thread on core 2 00:29:47.716 Starting thread on core 3 00:29:47.716 Starting thread on core 0 00:29:47.716 15:10:33 -- host/target_disconnect.sh@59 -- # sync 00:29:47.716 00:29:47.716 real 0m10.832s 00:29:47.716 user 0m18.668s 00:29:47.716 sys 0m5.205s 00:29:47.716 15:10:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:47.716 15:10:33 -- common/autotest_common.sh@10 -- # set +x 00:29:47.716 ************************************ 00:29:47.716 END TEST nvmf_target_disconnect_tc2 00:29:47.716 ************************************ 00:29:47.716 15:10:33 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:29:47.716 15:10:33 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:47.716 15:10:33 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:47.716 15:10:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:47.716 15:10:33 -- nvmf/common.sh@117 -- # sync 00:29:47.716 15:10:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:47.716 15:10:33 -- nvmf/common.sh@120 -- # set +e 00:29:47.716 15:10:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:47.716 15:10:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:47.716 rmmod nvme_tcp 00:29:47.716 rmmod nvme_fabrics 00:29:47.716 rmmod nvme_keyring 00:29:47.716 15:10:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.716 15:10:33 -- nvmf/common.sh@124 -- # set -e 00:29:47.716 15:10:33 -- nvmf/common.sh@125 -- # return 0 00:29:47.716 15:10:33 -- nvmf/common.sh@478 -- # '[' -n 3907469 ']' 00:29:47.716 15:10:33 -- nvmf/common.sh@479 -- # killprocess 3907469 00:29:47.716 15:10:33 -- common/autotest_common.sh@936 -- # '[' -z 3907469 ']' 00:29:47.716 15:10:33 -- common/autotest_common.sh@940 -- # kill -0 3907469 00:29:47.716 15:10:33 -- common/autotest_common.sh@941 -- # uname 00:29:47.716 15:10:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:47.716 15:10:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3907469 00:29:47.716 15:10:33 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:29:47.716 15:10:33 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:29:47.716 15:10:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3907469' 00:29:47.716 killing process with pid 3907469 00:29:47.716 15:10:33 -- common/autotest_common.sh@955 -- # kill 3907469 00:29:47.716 15:10:33 -- common/autotest_common.sh@960 -- # wait 3907469 00:29:47.975 15:10:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:47.975 15:10:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:47.975 15:10:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:47.975 15:10:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.975 15:10:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:47.975 15:10:33 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.975 15:10:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.975 15:10:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.507 15:10:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:50.507 00:29:50.507 real 0m15.804s 00:29:50.507 user 0m45.290s 00:29:50.507 sys 0m7.242s 00:29:50.507 15:10:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:50.507 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:29:50.507 ************************************ 00:29:50.507 END TEST nvmf_target_disconnect 00:29:50.507 ************************************ 00:29:50.507 15:10:35 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:29:50.507 15:10:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:50.507 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:29:50.507 15:10:35 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:29:50.507 00:29:50.507 real 22m54.650s 00:29:50.507 user 63m5.766s 00:29:50.507 sys 5m44.003s 00:29:50.507 15:10:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:50.507 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:29:50.507 ************************************ 00:29:50.507 END TEST nvmf_tcp 00:29:50.507 ************************************ 00:29:50.507 15:10:35 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:29:50.507 15:10:35 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:50.507 15:10:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:50.507 15:10:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:50.507 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:29:50.507 ************************************ 00:29:50.507 START TEST spdkcli_nvmf_tcp 00:29:50.507 ************************************ 00:29:50.507 15:10:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:50.507 * Looking for test storage... 
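The spdkcli test that begins here will start nvmf_tgt (pid 3908656 below) and then drive spdkcli_job.py through the bdev and NVMe-oF configuration quoted further down. For orientation, the first few spdkcli commands correspond roughly to the following plain rpc.py calls (a sketch; rpc.py talks to the default /var/tmp/spdk.sock socket the target listens on, and options such as max_io_qpairs_per_ctrlr are omitted where the short flag differs across SPDK versions):

# Approximate rpc.py equivalents of the spdkcli commands executed below:
./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1          # /bdevs/malloc create 32 512 Malloc1
./scripts/rpc.py nvmf_create_transport -t tcp -u 8192          # nvmf/transport create tcp io_unit_size=8192
./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
    -s N37SXV509SRW -m 4 -a                                    # subsystem create, allow_any_host=True
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
    -t tcp -a 127.0.0.1 -s 4260 -f ipv4                        # listen_addresses create tcp 127.0.0.1 4260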
00:29:50.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:50.507 15:10:35 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:50.507 15:10:35 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:50.507 15:10:35 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:50.507 15:10:35 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.507 15:10:35 -- nvmf/common.sh@7 -- # uname -s 00:29:50.507 15:10:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.507 15:10:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.507 15:10:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.507 15:10:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.507 15:10:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.507 15:10:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.507 15:10:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.507 15:10:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.507 15:10:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.507 15:10:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.507 15:10:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:50.507 15:10:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:50.507 15:10:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.507 15:10:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.507 15:10:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.507 15:10:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.507 15:10:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.507 15:10:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.507 15:10:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.507 15:10:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.507 15:10:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.507 15:10:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.507 15:10:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.507 15:10:35 -- paths/export.sh@5 -- # export PATH 00:29:50.508 15:10:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.508 15:10:35 -- nvmf/common.sh@47 -- # : 0 00:29:50.508 15:10:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:50.508 15:10:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:50.508 15:10:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.508 15:10:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.508 15:10:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.508 15:10:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:50.508 15:10:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:50.508 15:10:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:50.508 15:10:35 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:50.508 15:10:35 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:50.508 15:10:35 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:50.508 15:10:35 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:50.508 15:10:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:50.508 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:29:50.508 15:10:35 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:50.508 15:10:35 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3908656 00:29:50.508 15:10:35 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:50.508 15:10:35 -- spdkcli/common.sh@34 -- # waitforlisten 3908656 00:29:50.508 15:10:35 -- common/autotest_common.sh@817 -- # '[' -z 3908656 ']' 00:29:50.508 15:10:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.508 15:10:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:50.508 15:10:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.508 15:10:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:50.508 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:29:50.508 [2024-04-26 15:10:35.928071] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:29:50.508 [2024-04-26 15:10:35.928166] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908656 ] 00:29:50.508 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.508 [2024-04-26 15:10:35.961611] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:29:50.508 [2024-04-26 15:10:35.991385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:50.508 [2024-04-26 15:10:36.078395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.508 [2024-04-26 15:10:36.078399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.508 15:10:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:50.508 15:10:36 -- common/autotest_common.sh@850 -- # return 0 00:29:50.508 15:10:36 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:50.508 15:10:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:50.508 15:10:36 -- common/autotest_common.sh@10 -- # set +x 00:29:50.508 15:10:36 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:50.508 15:10:36 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:50.508 15:10:36 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:50.508 15:10:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:50.508 15:10:36 -- common/autotest_common.sh@10 -- # set +x 00:29:50.508 15:10:36 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:50.508 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:50.508 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:50.508 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:50.508 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:50.508 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:50.508 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:50.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:50.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:50.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:50.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:50.508 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:50.508 ' 00:29:51.075 [2024-04-26 15:10:36.596277] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:53.607 [2024-04-26 15:10:38.748388] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.544 [2024-04-26 15:10:39.984713] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:57.074 [2024-04-26 15:10:42.275893] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:58.978 [2024-04-26 15:10:44.238242] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:00.357 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:00.357 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:00.357 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:00.357 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:00.357 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:00.357 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:00.357 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:00.357 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:00.357 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:00.357 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:00.357 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:00.358 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:00.358 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:00.358 15:10:45 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:00.358 15:10:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:00.358 15:10:45 -- common/autotest_common.sh@10 -- # set +x 00:30:00.358 15:10:45 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:00.358 15:10:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:00.358 15:10:45 -- common/autotest_common.sh@10 -- # set +x 00:30:00.358 15:10:45 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:00.358 15:10:45 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:00.617 15:10:46 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:00.617 15:10:46 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:00.617 15:10:46 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:00.617 15:10:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:00.617 15:10:46 -- common/autotest_common.sh@10 -- # set +x 00:30:00.876 15:10:46 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:00.876 15:10:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:00.876 15:10:46 -- common/autotest_common.sh@10 -- # set +x 00:30:00.876 15:10:46 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:00.876 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:00.876 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:00.876 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:00.876 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:00.876 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:00.876 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:00.876 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:00.876 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:00.876 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:00.876 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:00.876 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:00.876 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:00.876 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:00.876 ' 00:30:06.171 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:06.171 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:06.171 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:06.171 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:06.171 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:06.171 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:06.171 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:06.171 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:06.171 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:06.171 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:06.171 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:06.171 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:06.171 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:06.171 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:06.171 15:10:51 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:06.171 15:10:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:06.171 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:30:06.171 15:10:51 -- spdkcli/nvmf.sh@90 -- # killprocess 3908656 00:30:06.171 15:10:51 -- common/autotest_common.sh@936 -- # '[' -z 3908656 ']' 00:30:06.171 15:10:51 -- common/autotest_common.sh@940 -- # kill -0 3908656 00:30:06.171 15:10:51 -- common/autotest_common.sh@941 -- # uname 00:30:06.171 15:10:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:06.171 15:10:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3908656 00:30:06.171 15:10:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:06.171 15:10:51 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:06.171 15:10:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3908656' 00:30:06.171 killing process with pid 3908656 00:30:06.171 15:10:51 -- common/autotest_common.sh@955 -- # kill 3908656 00:30:06.171 [2024-04-26 15:10:51.689432] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:06.171 15:10:51 -- common/autotest_common.sh@960 -- # wait 3908656 00:30:06.429 15:10:51 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:06.429 15:10:51 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:06.429 15:10:51 -- spdkcli/common.sh@13 -- # '[' -n 3908656 ']' 00:30:06.429 15:10:51 -- spdkcli/common.sh@14 -- # killprocess 3908656 00:30:06.429 15:10:51 -- common/autotest_common.sh@936 -- # '[' -z 3908656 ']' 00:30:06.429 15:10:51 -- common/autotest_common.sh@940 -- # kill -0 3908656 00:30:06.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3908656) - No such process 00:30:06.429 15:10:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3908656 is not found' 00:30:06.429 Process with pid 3908656 is not found 00:30:06.429 15:10:51 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:06.429 15:10:51 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:06.429 15:10:51 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:06.429 00:30:06.429 real 0m16.110s 00:30:06.429 user 0m34.137s 00:30:06.429 sys 0m0.794s 00:30:06.429 15:10:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:06.429 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:30:06.429 ************************************ 00:30:06.429 END TEST spdkcli_nvmf_tcp 00:30:06.429 ************************************ 00:30:06.429 15:10:51 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:06.429 15:10:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:06.429 15:10:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:06.429 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:30:06.429 ************************************ 00:30:06.429 START TEST nvmf_identify_passthru 00:30:06.429 ************************************ 00:30:06.429 15:10:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:06.429 * Looking for test storage... 
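[Note] START TEST nvmf_identify_passthru above kicks off a check of SPDK's admin-command passthrough: the script attaches a local PCIe NVMe controller as bdev Nvme0n1, exports it through an NVMe-oF/TCP subsystem created with nvmf_set_config --passthru-identify-ctrlr, and verifies that the serial and model numbers visible over the fabric match what the PCIe device itself reports. A minimal sketch of that comparison, assuming an SPDK build under $SPDK_DIR and a target already listening on 10.0.0.2:4420 (SPDK_DIR is a placeholder; the BDF and NQN match the ones this run discovers below):

  #!/usr/bin/env bash
  # Sketch only: compare identify data seen over PCIe vs. over NVMe/TCP.
  SPDK_DIR=/path/to/spdk               # assumption: local SPDK checkout/build
  IDENTIFY=$SPDK_DIR/build/bin/spdk_nvme_identify
  BDF=0000:82:00.0                     # PCIe address of the disk under test on this rig
  NQN=nqn.2016-06.io.spdk:cnode1       # subsystem NQN the test creates

  # Serial number read directly from the PCIe controller.
  pcie_sn=$("$IDENTIFY" -r "trtype:PCIe traddr:$BDF" | awk '/Serial Number:/ {print $3}')

  # Serial number reported through the TCP subsystem (the passthrough path).
  tcp_sn=$("$IDENTIFY" -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN" \
          | awk '/Serial Number:/ {print $3}')

  # Passthrough works only if both views agree; the script repeats this for the model number.
  [ "$pcie_sn" = "$tcp_sn" ] || { echo "serial mismatch: $pcie_sn vs $tcp_sn" >&2; exit 1; }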
00:30:06.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.429 15:10:52 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.429 15:10:52 -- nvmf/common.sh@7 -- # uname -s 00:30:06.429 15:10:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.429 15:10:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.429 15:10:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.429 15:10:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.429 15:10:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.429 15:10:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.429 15:10:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.429 15:10:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.429 15:10:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.429 15:10:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.429 15:10:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:06.429 15:10:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:06.429 15:10:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.429 15:10:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.429 15:10:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.429 15:10:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.429 15:10:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.429 15:10:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.429 15:10:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.429 15:10:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.429 15:10:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.429 15:10:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.429 15:10:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.429 15:10:52 -- paths/export.sh@5 -- # export PATH 00:30:06.429 15:10:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.429 15:10:52 -- nvmf/common.sh@47 -- # : 0 00:30:06.429 15:10:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:06.429 15:10:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:06.429 15:10:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.429 15:10:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.429 15:10:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.429 15:10:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:06.429 15:10:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:06.429 15:10:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:06.429 15:10:52 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.429 15:10:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.429 15:10:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.429 15:10:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.429 15:10:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.429 15:10:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.429 15:10:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.429 15:10:52 -- paths/export.sh@5 -- # export PATH 00:30:06.429 15:10:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.429 15:10:52 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:30:06.429 15:10:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:06.429 15:10:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.429 15:10:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:06.429 15:10:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:06.429 15:10:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:06.429 15:10:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.429 15:10:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:06.429 15:10:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.429 15:10:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:06.429 15:10:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:06.429 15:10:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:06.429 15:10:52 -- common/autotest_common.sh@10 -- # set +x 00:30:08.326 15:10:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:08.326 15:10:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:08.326 15:10:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:08.326 15:10:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:08.326 15:10:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:08.326 15:10:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:08.326 15:10:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:08.326 15:10:54 -- nvmf/common.sh@295 -- # net_devs=() 00:30:08.326 15:10:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:08.326 15:10:54 -- nvmf/common.sh@296 -- # e810=() 00:30:08.326 15:10:54 -- nvmf/common.sh@296 -- # local -ga e810 00:30:08.326 15:10:54 -- nvmf/common.sh@297 -- # x722=() 00:30:08.326 15:10:54 -- nvmf/common.sh@297 -- # local -ga x722 00:30:08.326 15:10:54 -- nvmf/common.sh@298 -- # mlx=() 00:30:08.326 15:10:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:08.326 15:10:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.326 15:10:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:08.326 15:10:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:08.326 15:10:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:08.326 15:10:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.326 15:10:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:08.326 Found 0000:84:00.0 (0x8086 - 
0x159b) 00:30:08.326 15:10:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.326 15:10:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:08.326 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:08.326 15:10:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:08.326 15:10:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.326 15:10:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.326 15:10:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:08.326 15:10:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.326 15:10:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:08.326 Found net devices under 0000:84:00.0: cvl_0_0 00:30:08.326 15:10:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.326 15:10:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.326 15:10:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.326 15:10:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:08.326 15:10:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.326 15:10:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:08.326 Found net devices under 0000:84:00.1: cvl_0_1 00:30:08.326 15:10:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.326 15:10:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:08.326 15:10:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:08.326 15:10:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:08.326 15:10:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:08.326 15:10:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.326 15:10:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.326 15:10:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.326 15:10:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:08.326 15:10:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.326 15:10:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.326 15:10:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:08.326 15:10:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.326 15:10:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.326 15:10:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:08.585 15:10:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:08.585 15:10:54 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:08.585 15:10:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.585 15:10:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.585 15:10:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.585 15:10:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:08.585 15:10:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.585 15:10:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.585 15:10:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.585 15:10:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:08.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:30:08.585 00:30:08.585 --- 10.0.0.2 ping statistics --- 00:30:08.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.585 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:30:08.585 15:10:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:08.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:30:08.585 00:30:08.585 --- 10.0.0.1 ping statistics --- 00:30:08.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.585 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:30:08.585 15:10:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.585 15:10:54 -- nvmf/common.sh@411 -- # return 0 00:30:08.585 15:10:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:08.585 15:10:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.585 15:10:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:08.585 15:10:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:08.585 15:10:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.585 15:10:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:08.585 15:10:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:08.585 15:10:54 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:08.585 15:10:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:08.585 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:30:08.585 15:10:54 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:08.585 15:10:54 -- common/autotest_common.sh@1510 -- # bdfs=() 00:30:08.585 15:10:54 -- common/autotest_common.sh@1510 -- # local bdfs 00:30:08.585 15:10:54 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:30:08.585 15:10:54 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:30:08.585 15:10:54 -- common/autotest_common.sh@1499 -- # bdfs=() 00:30:08.585 15:10:54 -- common/autotest_common.sh@1499 -- # local bdfs 00:30:08.585 15:10:54 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:08.585 15:10:54 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:08.586 15:10:54 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:30:08.586 15:10:54 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:30:08.586 15:10:54 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:30:08.586 15:10:54 -- common/autotest_common.sh@1513 -- # echo 0000:82:00.0 00:30:08.586 15:10:54 -- 
target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:30:08.586 15:10:54 -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:30:08.586 15:10:54 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:30:08.586 15:10:54 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:08.586 15:10:54 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:08.844 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.024 15:10:58 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:30:13.024 15:10:58 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:30:13.024 15:10:58 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:13.024 15:10:58 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:13.025 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.208 15:11:02 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:17.208 15:11:02 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:17.208 15:11:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:17.208 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:30:17.208 15:11:02 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:17.208 15:11:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:17.208 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:30:17.208 15:11:02 -- target/identify_passthru.sh@31 -- # nvmfpid=3913247 00:30:17.208 15:11:02 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:17.208 15:11:02 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:17.208 15:11:02 -- target/identify_passthru.sh@35 -- # waitforlisten 3913247 00:30:17.208 15:11:02 -- common/autotest_common.sh@817 -- # '[' -z 3913247 ']' 00:30:17.208 15:11:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.208 15:11:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:17.208 15:11:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.208 15:11:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:17.208 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:30:17.208 [2024-04-26 15:11:02.753037] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:30:17.208 [2024-04-26 15:11:02.753132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.208 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.208 [2024-04-26 15:11:02.797097] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
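[Note] The bdf=0000:82:00.0 assignment above comes from get_first_nvme_bdf, which asks scripts/gen_nvme.sh for a local-attach bdev config and pulls every controller's traddr out with jq (the autotest_common.sh@1500 trace shows the exact pipeline). The same lookup in isolation, with rootdir standing in for the SPDK tree:

  # Sketch: enumerate NVMe PCIe addresses the way get_nvme_bdfs does.
  rootdir=/path/to/spdk    # assumption: your SPDK source tree
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  echo "${bdfs[0]}"        # prints 0000:82:00.0 on this rig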
00:30:17.208 [2024-04-26 15:11:02.824747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:17.208 [2024-04-26 15:11:02.912911] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.208 [2024-04-26 15:11:02.912983] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.208 [2024-04-26 15:11:02.913012] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.208 [2024-04-26 15:11:02.913032] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.208 [2024-04-26 15:11:02.913043] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.208 [2024-04-26 15:11:02.913102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.208 [2024-04-26 15:11:02.913427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.208 [2024-04-26 15:11:02.913487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.208 [2024-04-26 15:11:02.913490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.466 15:11:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:17.466 15:11:02 -- common/autotest_common.sh@850 -- # return 0 00:30:17.466 15:11:02 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:17.466 15:11:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.466 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:30:17.466 INFO: Log level set to 20 00:30:17.466 INFO: Requests: 00:30:17.466 { 00:30:17.466 "jsonrpc": "2.0", 00:30:17.466 "method": "nvmf_set_config", 00:30:17.466 "id": 1, 00:30:17.466 "params": { 00:30:17.466 "admin_cmd_passthru": { 00:30:17.466 "identify_ctrlr": true 00:30:17.466 } 00:30:17.466 } 00:30:17.466 } 00:30:17.466 00:30:17.466 INFO: response: 00:30:17.466 { 00:30:17.466 "jsonrpc": "2.0", 00:30:17.466 "id": 1, 00:30:17.466 "result": true 00:30:17.466 } 00:30:17.466 00:30:17.466 15:11:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.466 15:11:02 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:17.466 15:11:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.466 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:30:17.466 INFO: Setting log level to 20 00:30:17.466 INFO: Setting log level to 20 00:30:17.466 INFO: Log level set to 20 00:30:17.466 INFO: Log level set to 20 00:30:17.466 INFO: Requests: 00:30:17.466 { 00:30:17.466 "jsonrpc": "2.0", 00:30:17.466 "method": "framework_start_init", 00:30:17.466 "id": 1 00:30:17.466 } 00:30:17.466 00:30:17.466 INFO: Requests: 00:30:17.466 { 00:30:17.466 "jsonrpc": "2.0", 00:30:17.466 "method": "framework_start_init", 00:30:17.466 "id": 1 00:30:17.466 } 00:30:17.466 00:30:17.466 [2024-04-26 15:11:03.058260] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:17.466 INFO: response: 00:30:17.466 { 00:30:17.466 "jsonrpc": "2.0", 00:30:17.466 "id": 1, 00:30:17.466 "result": true 00:30:17.466 } 00:30:17.466 00:30:17.466 INFO: response: 00:30:17.466 { 00:30:17.466 "jsonrpc": "2.0", 00:30:17.466 "id": 1, 00:30:17.466 "result": true 00:30:17.466 } 00:30:17.466 00:30:17.466 15:11:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.466 15:11:03 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.466 15:11:03 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.466 15:11:03 -- common/autotest_common.sh@10 -- # set +x 00:30:17.466 INFO: Setting log level to 40 00:30:17.466 INFO: Setting log level to 40 00:30:17.466 INFO: Setting log level to 40 00:30:17.466 [2024-04-26 15:11:03.068271] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.466 15:11:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.466 15:11:03 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:17.466 15:11:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:17.466 15:11:03 -- common/autotest_common.sh@10 -- # set +x 00:30:17.466 15:11:03 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:30:17.466 15:11:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.466 15:11:03 -- common/autotest_common.sh@10 -- # set +x 00:30:20.745 Nvme0n1 00:30:20.745 15:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.745 15:11:05 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:20.745 15:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.745 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:30:20.745 15:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.745 15:11:05 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:20.745 15:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.745 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:30:20.745 15:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.745 15:11:05 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.745 15:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.745 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:30:20.745 [2024-04-26 15:11:05.955383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.745 15:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.745 15:11:05 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:20.745 15:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.745 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:30:20.745 [2024-04-26 15:11:05.963100] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:20.745 [ 00:30:20.745 { 00:30:20.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:20.745 "subtype": "Discovery", 00:30:20.745 "listen_addresses": [], 00:30:20.745 "allow_any_host": true, 00:30:20.745 "hosts": [] 00:30:20.745 }, 00:30:20.745 { 00:30:20.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:20.745 "subtype": "NVMe", 00:30:20.745 "listen_addresses": [ 00:30:20.745 { 00:30:20.745 "transport": "TCP", 00:30:20.745 "trtype": "TCP", 00:30:20.745 "adrfam": "IPv4", 00:30:20.745 "traddr": "10.0.0.2", 00:30:20.745 "trsvcid": "4420" 00:30:20.745 } 00:30:20.745 ], 00:30:20.745 "allow_any_host": true, 00:30:20.745 "hosts": [], 00:30:20.745 "serial_number": "SPDK00000000000001", 00:30:20.745 "model_number": "SPDK bdev Controller", 00:30:20.745 "max_namespaces": 1, 00:30:20.745 "min_cntlid": 1, 00:30:20.745 "max_cntlid": 65519, 00:30:20.745 "namespaces": [ 00:30:20.745 
{ 00:30:20.745 "nsid": 1, 00:30:20.745 "bdev_name": "Nvme0n1", 00:30:20.745 "name": "Nvme0n1", 00:30:20.745 "nguid": "D103E06CFDF547B1A7E23B94B43FE92C", 00:30:20.745 "uuid": "d103e06c-fdf5-47b1-a7e2-3b94b43fe92c" 00:30:20.745 } 00:30:20.745 ] 00:30:20.745 } 00:30:20.745 ] 00:30:20.745 15:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.745 15:11:05 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:20.745 15:11:05 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:20.745 15:11:05 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:20.745 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.745 15:11:06 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:30:20.745 15:11:06 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:20.745 15:11:06 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:20.745 15:11:06 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:20.745 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.745 15:11:06 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:20.745 15:11:06 -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:30:20.745 15:11:06 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:20.745 15:11:06 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:20.745 15:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.745 15:11:06 -- common/autotest_common.sh@10 -- # set +x 00:30:20.745 15:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.745 15:11:06 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:20.745 15:11:06 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:20.745 15:11:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:20.745 15:11:06 -- nvmf/common.sh@117 -- # sync 00:30:20.745 15:11:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:20.745 15:11:06 -- nvmf/common.sh@120 -- # set +e 00:30:20.745 15:11:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:20.745 15:11:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:20.745 rmmod nvme_tcp 00:30:20.745 rmmod nvme_fabrics 00:30:20.745 rmmod nvme_keyring 00:30:20.745 15:11:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:20.745 15:11:06 -- nvmf/common.sh@124 -- # set -e 00:30:20.745 15:11:06 -- nvmf/common.sh@125 -- # return 0 00:30:20.745 15:11:06 -- nvmf/common.sh@478 -- # '[' -n 3913247 ']' 00:30:20.745 15:11:06 -- nvmf/common.sh@479 -- # killprocess 3913247 00:30:20.745 15:11:06 -- common/autotest_common.sh@936 -- # '[' -z 3913247 ']' 00:30:20.745 15:11:06 -- common/autotest_common.sh@940 -- # kill -0 3913247 00:30:20.745 15:11:06 -- common/autotest_common.sh@941 -- # uname 00:30:20.745 15:11:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:20.745 15:11:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3913247 00:30:20.745 15:11:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:20.745 15:11:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:20.745 15:11:06 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 3913247' 00:30:20.745 killing process with pid 3913247 00:30:20.745 15:11:06 -- common/autotest_common.sh@955 -- # kill 3913247 00:30:20.745 [2024-04-26 15:11:06.464933] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:20.745 15:11:06 -- common/autotest_common.sh@960 -- # wait 3913247 00:30:22.645 15:11:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:22.645 15:11:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:22.645 15:11:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:22.645 15:11:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:22.645 15:11:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:22.645 15:11:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.645 15:11:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:22.645 15:11:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.556 15:11:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:24.556 00:30:24.556 real 0m18.012s 00:30:24.556 user 0m26.817s 00:30:24.556 sys 0m2.313s 00:30:24.556 15:11:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:24.556 15:11:10 -- common/autotest_common.sh@10 -- # set +x 00:30:24.556 ************************************ 00:30:24.556 END TEST nvmf_identify_passthru 00:30:24.556 ************************************ 00:30:24.556 15:11:10 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:24.556 15:11:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:24.556 15:11:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:24.557 15:11:10 -- common/autotest_common.sh@10 -- # set +x 00:30:24.557 ************************************ 00:30:24.557 START TEST nvmf_dif 00:30:24.557 ************************************ 00:30:24.557 15:11:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:24.557 * Looking for test storage... 
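[Note] The block above is nvmftestfini closing out identify_passthru: sync, unload the NVMe/TCP initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kill the nvmf_tgt process, then drop the spdk namespace and flush the leftover address so the next test (nvmf_dif, starting here) gets a clean slate. Roughly, and assuming this rig's cvl_0_* names:

  # Sketch: the teardown order nvmftestfini follows on this rig.
  sync
  modprobe -v -r nvme-tcp          # pulls nvme_fabrics/nvme_keyring out with it
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid" 2>/dev/null   # killprocess
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: what _remove_spdk_ns boils down to
  ip -4 addr flush cvl_0_1         # leave the initiator port unaddressed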
00:30:24.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:24.557 15:11:10 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.557 15:11:10 -- nvmf/common.sh@7 -- # uname -s 00:30:24.557 15:11:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.557 15:11:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.557 15:11:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.557 15:11:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.557 15:11:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.557 15:11:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.557 15:11:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.557 15:11:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.557 15:11:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.557 15:11:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.557 15:11:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:24.557 15:11:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:24.557 15:11:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.557 15:11:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.557 15:11:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.557 15:11:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.557 15:11:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.557 15:11:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.557 15:11:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.557 15:11:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.557 15:11:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.557 15:11:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.557 15:11:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.557 15:11:10 -- paths/export.sh@5 -- # export PATH 00:30:24.557 15:11:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.557 15:11:10 -- nvmf/common.sh@47 -- # : 0 00:30:24.557 15:11:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:24.557 15:11:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:24.557 15:11:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.557 15:11:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.557 15:11:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.557 15:11:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:24.557 15:11:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:24.557 15:11:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:24.557 15:11:10 -- target/dif.sh@15 -- # NULL_META=16 00:30:24.557 15:11:10 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:24.557 15:11:10 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:24.557 15:11:10 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:24.557 15:11:10 -- target/dif.sh@135 -- # nvmftestinit 00:30:24.557 15:11:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:24.557 15:11:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.557 15:11:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:24.557 15:11:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:24.557 15:11:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:24.557 15:11:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.557 15:11:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:24.557 15:11:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.557 15:11:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:24.557 15:11:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:24.557 15:11:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:24.557 15:11:10 -- common/autotest_common.sh@10 -- # set +x 00:30:26.458 15:11:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:26.458 15:11:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:26.458 15:11:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:26.458 15:11:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:26.458 15:11:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:26.458 15:11:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:26.458 15:11:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:26.458 15:11:12 -- nvmf/common.sh@295 -- # net_devs=() 00:30:26.458 15:11:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:26.458 15:11:12 -- nvmf/common.sh@296 -- # e810=() 00:30:26.458 15:11:12 -- nvmf/common.sh@296 -- # local -ga e810 00:30:26.458 15:11:12 -- nvmf/common.sh@297 -- # x722=() 00:30:26.458 15:11:12 -- nvmf/common.sh@297 -- # local -ga x722 00:30:26.458 15:11:12 -- nvmf/common.sh@298 -- # mlx=() 00:30:26.458 15:11:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:26.458 15:11:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:30:26.458 15:11:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.458 15:11:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:26.458 15:11:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:26.458 15:11:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:26.458 15:11:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.458 15:11:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:26.458 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:26.458 15:11:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.458 15:11:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:26.458 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:26.458 15:11:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:26.458 15:11:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.458 15:11:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.458 15:11:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:26.458 15:11:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.458 15:11:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:26.458 Found net devices under 0000:84:00.0: cvl_0_0 00:30:26.458 15:11:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.458 15:11:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.458 15:11:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.458 15:11:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:26.458 15:11:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.458 15:11:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:26.458 Found net devices under 0000:84:00.1: cvl_0_1 00:30:26.458 15:11:12 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:26.458 15:11:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:26.458 15:11:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:26.458 15:11:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:26.458 15:11:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:26.458 15:11:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.458 15:11:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.458 15:11:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.458 15:11:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:26.458 15:11:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.458 15:11:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.458 15:11:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:26.458 15:11:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.458 15:11:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.458 15:11:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:26.458 15:11:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:26.458 15:11:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.458 15:11:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.717 15:11:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.717 15:11:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.717 15:11:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:26.717 15:11:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.717 15:11:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.717 15:11:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.717 15:11:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:26.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:30:26.717 00:30:26.717 --- 10.0.0.2 ping statistics --- 00:30:26.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.717 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:30:26.717 15:11:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:30:26.717 00:30:26.717 --- 10.0.0.1 ping statistics --- 00:30:26.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.717 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:30:26.717 15:11:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.717 15:11:12 -- nvmf/common.sh@411 -- # return 0 00:30:26.717 15:11:12 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:30:26.717 15:11:12 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:27.650 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:27.650 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:27.650 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:27.650 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:27.650 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:27.650 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:27.650 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:27.650 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:27.650 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:27.650 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:27.650 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:27.650 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:27.650 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:27.650 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:27.650 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:27.650 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:27.650 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:27.908 15:11:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.908 15:11:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:27.908 15:11:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:27.908 15:11:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.908 15:11:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:27.908 15:11:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:27.908 15:11:13 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:27.908 15:11:13 -- target/dif.sh@137 -- # nvmfappstart 00:30:27.908 15:11:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:27.908 15:11:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:27.908 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:30:27.908 15:11:13 -- nvmf/common.sh@470 -- # nvmfpid=3916432 00:30:27.908 15:11:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:27.908 15:11:13 -- nvmf/common.sh@471 -- # waitforlisten 3916432 00:30:27.908 15:11:13 -- common/autotest_common.sh@817 -- # '[' -z 3916432 ']' 00:30:27.908 15:11:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.908 15:11:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:27.908 15:11:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
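
The nvmf_tcp_init trace above builds a two-port loopback topology: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 as the target side, while the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1/24, so the NVMe/TCP traffic actually crosses the back-to-back ports rather than loopback. A condensed sketch of that setup, with hypothetical interface names eth_tgt/eth_ini standing in for cvl_0_0/cvl_0_1:

    NS=tgt_ns
    ip netns add "$NS"
    ip link set eth_tgt netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev eth_ini          # initiator stays in the default ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec "$NS" ip link set eth_tgt up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
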
00:30:27.908 15:11:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:27.908 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:30:27.908 [2024-04-26 15:11:13.507603] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:30:27.908 [2024-04-26 15:11:13.507686] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.908 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.908 [2024-04-26 15:11:13.547707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:27.908 [2024-04-26 15:11:13.574428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.166 [2024-04-26 15:11:13.662590] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.166 [2024-04-26 15:11:13.662650] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.166 [2024-04-26 15:11:13.662664] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.166 [2024-04-26 15:11:13.662675] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.166 [2024-04-26 15:11:13.662685] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.166 [2024-04-26 15:11:13.662724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.166 15:11:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:28.166 15:11:13 -- common/autotest_common.sh@850 -- # return 0 00:30:28.166 15:11:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:28.166 15:11:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:28.166 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:30:28.166 15:11:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.166 15:11:13 -- target/dif.sh@139 -- # create_transport 00:30:28.166 15:11:13 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:28.166 15:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.166 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:30:28.166 [2024-04-26 15:11:13.808474] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.166 15:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.166 15:11:13 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:28.166 15:11:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:28.166 15:11:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:28.166 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:30:28.166 ************************************ 00:30:28.166 START TEST fio_dif_1_default 00:30:28.166 ************************************ 00:30:28.166 15:11:13 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:30:28.166 15:11:13 -- target/dif.sh@86 -- # create_subsystems 0 00:30:28.166 15:11:13 -- target/dif.sh@28 -- # local sub 00:30:28.166 15:11:13 -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.166 15:11:13 -- target/dif.sh@31 -- # create_subsystem 0 00:30:28.166 15:11:13 -- target/dif.sh@18 -- # local sub_id=0 00:30:28.166 15:11:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 
512 --md-size 16 --dif-type 1 00:30:28.166 15:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.166 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:30:28.424 bdev_null0 00:30:28.424 15:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.424 15:11:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:28.424 15:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.424 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:30:28.424 15:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.424 15:11:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:28.424 15:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.424 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:30:28.424 15:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.424 15:11:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:28.424 15:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.424 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:30:28.424 [2024-04-26 15:11:13.932971] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.424 15:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.424 15:11:13 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:28.424 15:11:13 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:28.424 15:11:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.424 15:11:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:28.424 15:11:13 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.424 15:11:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:28.424 15:11:13 -- target/dif.sh@82 -- # gen_fio_conf 00:30:28.424 15:11:13 -- nvmf/common.sh@521 -- # config=() 00:30:28.424 15:11:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:28.424 15:11:13 -- nvmf/common.sh@521 -- # local subsystem config 00:30:28.424 15:11:13 -- target/dif.sh@54 -- # local file 00:30:28.424 15:11:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:28.424 15:11:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:28.424 15:11:13 -- target/dif.sh@56 -- # cat 00:30:28.424 15:11:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:28.424 15:11:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:28.424 { 00:30:28.424 "params": { 00:30:28.424 "name": "Nvme$subsystem", 00:30:28.424 "trtype": "$TEST_TRANSPORT", 00:30:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.424 "adrfam": "ipv4", 00:30:28.424 "trsvcid": "$NVMF_PORT", 00:30:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.424 "hdgst": ${hdgst:-false}, 00:30:28.424 "ddgst": ${ddgst:-false} 00:30:28.424 }, 00:30:28.424 "method": "bdev_nvme_attach_controller" 00:30:28.424 } 00:30:28.424 EOF 00:30:28.424 )") 00:30:28.424 15:11:13 -- common/autotest_common.sh@1327 -- # shift 00:30:28.424 15:11:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:28.424 15:11:13 -- 
common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.424 15:11:13 -- nvmf/common.sh@543 -- # cat 00:30:28.424 15:11:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:28.424 15:11:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:28.424 15:11:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:28.424 15:11:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:28.424 15:11:13 -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.424 15:11:13 -- nvmf/common.sh@545 -- # jq . 00:30:28.424 15:11:13 -- nvmf/common.sh@546 -- # IFS=, 00:30:28.424 15:11:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:28.424 "params": { 00:30:28.424 "name": "Nvme0", 00:30:28.424 "trtype": "tcp", 00:30:28.424 "traddr": "10.0.0.2", 00:30:28.424 "adrfam": "ipv4", 00:30:28.424 "trsvcid": "4420", 00:30:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.424 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:28.424 "hdgst": false, 00:30:28.424 "ddgst": false 00:30:28.424 }, 00:30:28.424 "method": "bdev_nvme_attach_controller" 00:30:28.424 }' 00:30:28.424 15:11:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:28.424 15:11:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:28.424 15:11:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.424 15:11:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:28.424 15:11:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:28.424 15:11:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:28.424 15:11:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:28.424 15:11:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:28.424 15:11:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:28.424 15:11:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.683 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:28.683 fio-3.35 00:30:28.683 Starting 1 thread 00:30:28.683 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.871 00:30:40.871 filename0: (groupid=0, jobs=1): err= 0: pid=3916688: Fri Apr 26 15:11:24 2024 00:30:40.871 read: IOPS=189, BW=756KiB/s (775kB/s)(7568KiB/10004msec) 00:30:40.871 slat (usec): min=4, max=513, avg=10.25, stdev=12.71 00:30:40.871 clat (usec): min=571, max=47944, avg=21117.00, stdev=20257.44 00:30:40.871 lat (usec): min=579, max=47960, avg=21127.24, stdev=20257.27 00:30:40.871 clat percentiles (usec): 00:30:40.871 | 1.00th=[ 619], 5.00th=[ 644], 10.00th=[ 668], 20.00th=[ 709], 00:30:40.871 | 30.00th=[ 766], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:30:40.871 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:40.871 | 99.00th=[41681], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:30:40.871 | 99.99th=[47973] 00:30:40.871 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=757.89, stdev=23.98, samples=19 00:30:40.871 iops : min= 176, max= 192, avg=189.47, stdev= 5.99, samples=19 00:30:40.871 lat (usec) : 750=28.17%, 1000=21.51% 00:30:40.871 lat (msec) : 50=50.32% 00:30:40.871 cpu : usr=90.04%, sys=9.69%, ctx=15, majf=0, minf=169 00:30:40.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:30:40.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.871 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:40.871 00:30:40.871 Run status group 0 (all jobs): 00:30:40.871 READ: bw=756KiB/s (775kB/s), 756KiB/s-756KiB/s (775kB/s-775kB/s), io=7568KiB (7750kB), run=10004-10004msec 00:30:40.871 15:11:25 -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:40.871 15:11:25 -- target/dif.sh@43 -- # local sub 00:30:40.871 15:11:25 -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.871 15:11:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:40.871 15:11:25 -- target/dif.sh@36 -- # local sub_id=0 00:30:40.871 15:11:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:40.871 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.871 15:11:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:40.871 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.871 00:30:40.871 real 0m11.117s 00:30:40.871 user 0m10.197s 00:30:40.871 sys 0m1.238s 00:30:40.871 15:11:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 ************************************ 00:30:40.871 END TEST fio_dif_1_default 00:30:40.871 ************************************ 00:30:40.871 15:11:25 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:40.871 15:11:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:40.871 15:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 ************************************ 00:30:40.871 START TEST fio_dif_1_multi_subsystems 00:30:40.871 ************************************ 00:30:40.871 15:11:25 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:30:40.871 15:11:25 -- target/dif.sh@92 -- # local files=1 00:30:40.871 15:11:25 -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:40.871 15:11:25 -- target/dif.sh@28 -- # local sub 00:30:40.871 15:11:25 -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.871 15:11:25 -- target/dif.sh@31 -- # create_subsystem 0 00:30:40.871 15:11:25 -- target/dif.sh@18 -- # local sub_id=0 00:30:40.871 15:11:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:40.871 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 bdev_null0 00:30:40.871 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.871 15:11:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:40.871 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.871 15:11:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:40.871 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.871 15:11:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.871 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 [2024-04-26 15:11:25.185569] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.871 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.871 15:11:25 -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.871 15:11:25 -- target/dif.sh@31 -- # create_subsystem 1 00:30:40.871 15:11:25 -- target/dif.sh@18 -- # local sub_id=1 00:30:40.871 15:11:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:40.871 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 bdev_null1 00:30:40.871 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.871 15:11:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:40.871 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.871 15:11:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:40.871 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.871 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.872 15:11:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.872 15:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.872 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:30:40.872 15:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.872 15:11:25 -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:40.872 15:11:25 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:40.872 15:11:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.872 15:11:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:40.872 15:11:25 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.872 15:11:25 -- nvmf/common.sh@521 -- # config=() 00:30:40.872 15:11:25 -- target/dif.sh@82 -- # gen_fio_conf 00:30:40.872 15:11:25 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:40.872 15:11:25 -- nvmf/common.sh@521 -- # local subsystem config 00:30:40.872 15:11:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:40.872 15:11:25 -- target/dif.sh@54 -- # local file 00:30:40.872 15:11:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:40.872 15:11:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:40.872 15:11:25 -- target/dif.sh@56 -- # cat 00:30:40.872 15:11:25 -- common/autotest_common.sh@1326 -- 
# local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.872 15:11:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:40.872 { 00:30:40.872 "params": { 00:30:40.872 "name": "Nvme$subsystem", 00:30:40.872 "trtype": "$TEST_TRANSPORT", 00:30:40.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.872 "adrfam": "ipv4", 00:30:40.872 "trsvcid": "$NVMF_PORT", 00:30:40.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.872 "hdgst": ${hdgst:-false}, 00:30:40.872 "ddgst": ${ddgst:-false} 00:30:40.872 }, 00:30:40.872 "method": "bdev_nvme_attach_controller" 00:30:40.872 } 00:30:40.872 EOF 00:30:40.872 )") 00:30:40.872 15:11:25 -- common/autotest_common.sh@1327 -- # shift 00:30:40.872 15:11:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:40.872 15:11:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.872 15:11:25 -- nvmf/common.sh@543 -- # cat 00:30:40.872 15:11:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.872 15:11:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:40.872 15:11:25 -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.872 15:11:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:40.872 15:11:25 -- target/dif.sh@73 -- # cat 00:30:40.872 15:11:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:40.872 15:11:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:40.872 15:11:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:40.872 { 00:30:40.872 "params": { 00:30:40.872 "name": "Nvme$subsystem", 00:30:40.872 "trtype": "$TEST_TRANSPORT", 00:30:40.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.872 "adrfam": "ipv4", 00:30:40.872 "trsvcid": "$NVMF_PORT", 00:30:40.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.872 "hdgst": ${hdgst:-false}, 00:30:40.872 "ddgst": ${ddgst:-false} 00:30:40.872 }, 00:30:40.872 "method": "bdev_nvme_attach_controller" 00:30:40.872 } 00:30:40.872 EOF 00:30:40.872 )") 00:30:40.872 15:11:25 -- target/dif.sh@72 -- # (( file++ )) 00:30:40.872 15:11:25 -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.872 15:11:25 -- nvmf/common.sh@543 -- # cat 00:30:40.872 15:11:25 -- nvmf/common.sh@545 -- # jq . 
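
For reference, the single-subsystem run above is internally consistent: at 4 KiB reads, 756 KiB/s works out to 189 IOPS, matching the reported average of 189.47. For the two-subsystem variant being prepared here, gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem index, joins them with commas, and pipes the result through jq; fio then receives the config on /dev/fd/62 via process substitution instead of a temp file. A rough standalone sketch of that generator (the outer wrapper shape is an assumption; the trace only shows the per-controller stanzas):

    gen_json() {
        local sub stanzas=()
        for sub in "$@"; do
            stanzas+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$sub" "$sub" "$sub")")
        done
        local IFS=,
        # Wrapper below is assumed, not shown verbatim in the trace.
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${stanzas[*]}" | jq .
    }
    # Usage: fio --ioengine=spdk_bdev --spdk_json_conf <(gen_json 0 1) ...
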
00:30:40.872 15:11:25 -- nvmf/common.sh@546 -- # IFS=, 00:30:40.872 15:11:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:40.872 "params": { 00:30:40.872 "name": "Nvme0", 00:30:40.872 "trtype": "tcp", 00:30:40.872 "traddr": "10.0.0.2", 00:30:40.872 "adrfam": "ipv4", 00:30:40.872 "trsvcid": "4420", 00:30:40.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:40.872 "hdgst": false, 00:30:40.872 "ddgst": false 00:30:40.872 }, 00:30:40.872 "method": "bdev_nvme_attach_controller" 00:30:40.872 },{ 00:30:40.872 "params": { 00:30:40.872 "name": "Nvme1", 00:30:40.872 "trtype": "tcp", 00:30:40.872 "traddr": "10.0.0.2", 00:30:40.872 "adrfam": "ipv4", 00:30:40.872 "trsvcid": "4420", 00:30:40.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:40.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:40.872 "hdgst": false, 00:30:40.872 "ddgst": false 00:30:40.872 }, 00:30:40.872 "method": "bdev_nvme_attach_controller" 00:30:40.872 }' 00:30:40.872 15:11:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:40.872 15:11:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:40.872 15:11:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.872 15:11:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.872 15:11:25 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:40.872 15:11:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:40.872 15:11:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:40.872 15:11:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:40.872 15:11:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:40.872 15:11:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.872 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:40.872 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:40.872 fio-3.35 00:30:40.872 Starting 2 threads 00:30:40.872 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.836 00:30:50.836 filename0: (groupid=0, jobs=1): err= 0: pid=3918120: Fri Apr 26 15:11:36 2024 00:30:50.836 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10004msec) 00:30:50.836 slat (nsec): min=7582, max=33751, avg=10301.54, stdev=4269.19 00:30:50.836 clat (usec): min=40877, max=42964, avg=41650.19, stdev=490.98 00:30:50.836 lat (usec): min=40886, max=42984, avg=41660.49, stdev=490.87 00:30:50.836 clat percentiles (usec): 00:30:50.836 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:50.836 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:30:50.836 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:50.836 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:30:50.836 | 99.99th=[42730] 00:30:50.836 bw ( KiB/s): min= 352, max= 416, per=49.76%, avg=382.40, stdev=19.35, samples=20 00:30:50.836 iops : min= 88, max= 104, avg=95.60, stdev= 4.84, samples=20 00:30:50.836 lat (msec) : 50=100.00% 00:30:50.836 cpu : usr=95.10%, sys=4.59%, ctx=18, majf=0, minf=128 00:30:50.836 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:50.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.836 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.836 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:50.836 filename1: (groupid=0, jobs=1): err= 0: pid=3918121: Fri Apr 26 15:11:36 2024 00:30:50.836 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10001msec) 00:30:50.836 slat (usec): min=7, max=105, avg=10.35, stdev= 5.71 00:30:50.836 clat (usec): min=40788, max=42720, avg=41637.17, stdev=478.57 00:30:50.836 lat (usec): min=40795, max=42753, avg=41647.52, stdev=478.45 00:30:50.836 clat percentiles (usec): 00:30:50.836 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:50.836 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:30:50.836 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:50.836 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:50.836 | 99.99th=[42730] 00:30:50.836 bw ( KiB/s): min= 352, max= 416, per=50.02%, avg=384.00, stdev=10.67, samples=19 00:30:50.836 iops : min= 88, max= 104, avg=96.00, stdev= 2.67, samples=19 00:30:50.836 lat (msec) : 50=100.00% 00:30:50.836 cpu : usr=95.16%, sys=4.47%, ctx=21, majf=0, minf=228 00:30:50.836 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.836 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.836 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:50.836 00:30:50.836 Run status group 0 (all jobs): 00:30:50.836 READ: bw=768KiB/s (786kB/s), 384KiB/s-384KiB/s (393kB/s-393kB/s), io=7680KiB (7864kB), run=10001-10004msec 00:30:50.836 15:11:36 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:50.836 15:11:36 -- target/dif.sh@43 -- # local sub 00:30:50.836 15:11:36 -- target/dif.sh@45 -- # for sub in "$@" 00:30:50.836 15:11:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:50.836 15:11:36 -- target/dif.sh@36 -- # local sub_id=0 00:30:50.836 15:11:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:50.836 15:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.836 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:50.836 15:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.836 15:11:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:50.836 15:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.836 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:50.836 15:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.836 15:11:36 -- target/dif.sh@45 -- # for sub in "$@" 00:30:50.836 15:11:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:50.836 15:11:36 -- target/dif.sh@36 -- # local sub_id=1 00:30:50.836 15:11:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:50.836 15:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.836 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:50.836 15:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.836 15:11:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:50.836 15:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.836 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:50.836 15:11:36 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.836 00:30:50.836 real 0m11.273s 00:30:50.836 user 0m20.113s 00:30:50.836 sys 0m1.205s 00:30:50.836 15:11:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:50.836 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:50.836 ************************************ 00:30:50.836 END TEST fio_dif_1_multi_subsystems 00:30:50.836 ************************************ 00:30:50.836 15:11:36 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:50.836 15:11:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:50.836 15:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:50.836 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:50.836 ************************************ 00:30:50.836 START TEST fio_dif_rand_params 00:30:50.836 ************************************ 00:30:50.836 15:11:36 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:30:50.836 15:11:36 -- target/dif.sh@100 -- # local NULL_DIF 00:30:50.836 15:11:36 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:50.836 15:11:36 -- target/dif.sh@103 -- # NULL_DIF=3 00:30:50.836 15:11:36 -- target/dif.sh@103 -- # bs=128k 00:30:50.836 15:11:36 -- target/dif.sh@103 -- # numjobs=3 00:30:50.836 15:11:36 -- target/dif.sh@103 -- # iodepth=3 00:30:50.836 15:11:36 -- target/dif.sh@103 -- # runtime=5 00:30:50.836 15:11:36 -- target/dif.sh@105 -- # create_subsystems 0 00:30:50.836 15:11:36 -- target/dif.sh@28 -- # local sub 00:30:50.836 15:11:36 -- target/dif.sh@30 -- # for sub in "$@" 00:30:50.836 15:11:36 -- target/dif.sh@31 -- # create_subsystem 0 00:30:50.836 15:11:36 -- target/dif.sh@18 -- # local sub_id=0 00:30:50.836 15:11:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:50.836 15:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.836 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:50.836 bdev_null0 00:30:50.836 15:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.836 15:11:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:50.836 15:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.836 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:51.095 15:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.095 15:11:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:51.095 15:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.095 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:51.095 15:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.095 15:11:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:51.095 15:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:51.095 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:51.095 [2024-04-26 15:11:36.588725] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.095 15:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.095 15:11:36 -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:51.095 15:11:36 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:51.095 15:11:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:51.095 15:11:36 -- nvmf/common.sh@521 -- # config=() 00:30:51.095 15:11:36 -- 
nvmf/common.sh@521 -- # local subsystem config 00:30:51.095 15:11:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:51.096 15:11:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.096 15:11:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:51.096 { 00:30:51.096 "params": { 00:30:51.096 "name": "Nvme$subsystem", 00:30:51.096 "trtype": "$TEST_TRANSPORT", 00:30:51.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.096 "adrfam": "ipv4", 00:30:51.096 "trsvcid": "$NVMF_PORT", 00:30:51.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.096 "hdgst": ${hdgst:-false}, 00:30:51.096 "ddgst": ${ddgst:-false} 00:30:51.096 }, 00:30:51.096 "method": "bdev_nvme_attach_controller" 00:30:51.096 } 00:30:51.096 EOF 00:30:51.096 )") 00:30:51.096 15:11:36 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.096 15:11:36 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:51.096 15:11:36 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:51.096 15:11:36 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:51.096 15:11:36 -- target/dif.sh@82 -- # gen_fio_conf 00:30:51.096 15:11:36 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.096 15:11:36 -- target/dif.sh@54 -- # local file 00:30:51.096 15:11:36 -- common/autotest_common.sh@1327 -- # shift 00:30:51.096 15:11:36 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:51.096 15:11:36 -- target/dif.sh@56 -- # cat 00:30:51.096 15:11:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.096 15:11:36 -- nvmf/common.sh@543 -- # cat 00:30:51.096 15:11:36 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.096 15:11:36 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:51.096 15:11:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:51.096 15:11:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:51.096 15:11:36 -- target/dif.sh@72 -- # (( file <= files )) 00:30:51.096 15:11:36 -- nvmf/common.sh@545 -- # jq . 
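
fio_dif_rand_params repeats the subsystem setup but with NULL_DIF=3, so the null bdev carries 16 bytes of metadata per 512-byte block with DIF type 3 protection (broadly: type 1, used in the earlier tests, ties each block's reference tag to its LBA, while type 3 leaves the reference tag opaque). Since rpc_cmd in this harness resolves to scripts/rpc.py against the target's RPC socket, the same subsystem could be built by hand roughly like this (paths assume a default SPDK checkout):

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
         --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420
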
00:30:51.096 15:11:36 -- nvmf/common.sh@546 -- # IFS=, 00:30:51.096 15:11:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:51.096 "params": { 00:30:51.096 "name": "Nvme0", 00:30:51.096 "trtype": "tcp", 00:30:51.096 "traddr": "10.0.0.2", 00:30:51.096 "adrfam": "ipv4", 00:30:51.096 "trsvcid": "4420", 00:30:51.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:51.096 "hdgst": false, 00:30:51.096 "ddgst": false 00:30:51.096 }, 00:30:51.096 "method": "bdev_nvme_attach_controller" 00:30:51.096 }' 00:30:51.096 15:11:36 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:51.096 15:11:36 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:51.096 15:11:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.096 15:11:36 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.096 15:11:36 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:51.096 15:11:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:51.096 15:11:36 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:51.096 15:11:36 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:51.096 15:11:36 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:51.096 15:11:36 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.353 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:51.353 ... 00:30:51.353 fio-3.35 00:30:51.353 Starting 3 threads 00:30:51.353 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.916 00:30:57.916 filename0: (groupid=0, jobs=1): err= 0: pid=3919582: Fri Apr 26 15:11:42 2024 00:30:57.916 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(124MiB/5017msec) 00:30:57.916 slat (nsec): min=5058, max=52746, avg=14975.26, stdev=5851.09 00:30:57.916 clat (usec): min=4611, max=92847, avg=15195.85, stdev=13625.88 00:30:57.916 lat (usec): min=4623, max=92861, avg=15210.82, stdev=13625.61 00:30:57.916 clat percentiles (usec): 00:30:57.916 | 1.00th=[ 5276], 5.00th=[ 5997], 10.00th=[ 7308], 20.00th=[ 8717], 00:30:57.916 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[11731], 60.00th=[12387], 00:30:57.916 | 70.00th=[13173], 80.00th=[14091], 90.00th=[18482], 95.00th=[50594], 00:30:57.916 | 99.00th=[56361], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:30:57.916 | 99.99th=[92799] 00:30:57.916 bw ( KiB/s): min=15360, max=32577, per=31.31%, avg=25248.10, stdev=5379.98, samples=10 00:30:57.916 iops : min= 120, max= 254, avg=197.20, stdev=41.95, samples=10 00:30:57.916 lat (msec) : 10=33.47%, 20=56.72%, 50=3.64%, 100=6.17% 00:30:57.916 cpu : usr=91.03%, sys=8.47%, ctx=14, majf=0, minf=89 00:30:57.916 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.916 issued rwts: total=989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:57.916 filename0: (groupid=0, jobs=1): err= 0: pid=3919583: Fri Apr 26 15:11:42 2024 00:30:57.916 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(136MiB/5045msec) 00:30:57.916 slat (nsec): min=4157, max=49472, avg=13867.99, stdev=4539.79 00:30:57.916 clat 
(usec): min=4749, max=91077, avg=13852.92, stdev=11711.70 00:30:57.916 lat (usec): min=4762, max=91093, avg=13866.79, stdev=11711.77 00:30:57.916 clat percentiles (usec): 00:30:57.916 | 1.00th=[ 5276], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 8029], 00:30:57.916 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[11076], 60.00th=[11731], 00:30:57.916 | 70.00th=[12518], 80.00th=[13304], 90.00th=[16057], 95.00th=[49546], 00:30:57.916 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55837], 99.95th=[90702], 00:30:57.916 | 99.99th=[90702] 00:30:57.916 bw ( KiB/s): min=21504, max=34560, per=34.35%, avg=27699.20, stdev=3488.87, samples=10 00:30:57.916 iops : min= 168, max= 270, avg=216.40, stdev=27.26, samples=10 00:30:57.916 lat (msec) : 10=39.45%, 20=51.98%, 50=3.96%, 100=4.61% 00:30:57.916 cpu : usr=90.31%, sys=9.22%, ctx=18, majf=0, minf=49 00:30:57.916 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.916 issued rwts: total=1085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:57.916 filename0: (groupid=0, jobs=1): err= 0: pid=3919584: Fri Apr 26 15:11:42 2024 00:30:57.916 read: IOPS=218, BW=27.4MiB/s (28.7MB/s)(138MiB/5045msec) 00:30:57.916 slat (nsec): min=4877, max=39607, avg=13840.72, stdev=3700.41 00:30:57.916 clat (usec): min=4988, max=92745, avg=13650.34, stdev=10641.32 00:30:57.916 lat (usec): min=5000, max=92760, avg=13664.18, stdev=10641.17 00:30:57.916 clat percentiles (usec): 00:30:57.916 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 7373], 20.00th=[ 8455], 00:30:57.916 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[11207], 60.00th=[12125], 00:30:57.916 | 70.00th=[13042], 80.00th=[14353], 90.00th=[16581], 95.00th=[47449], 00:30:57.916 | 99.00th=[53216], 99.50th=[54264], 99.90th=[92799], 99.95th=[92799], 00:30:57.916 | 99.99th=[92799] 00:30:57.916 bw ( KiB/s): min=16640, max=34304, per=34.99%, avg=28211.20, stdev=5842.44, samples=10 00:30:57.916 iops : min= 130, max= 268, avg=220.40, stdev=45.64, samples=10 00:30:57.916 lat (msec) : 10=38.95%, 20=54.35%, 50=3.80%, 100=2.90% 00:30:57.916 cpu : usr=90.34%, sys=9.16%, ctx=9, majf=0, minf=111 00:30:57.916 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.917 issued rwts: total=1104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.917 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:57.917 00:30:57.917 Run status group 0 (all jobs): 00:30:57.917 READ: bw=78.7MiB/s (82.6MB/s), 24.6MiB/s-27.4MiB/s (25.8MB/s-28.7MB/s), io=397MiB (417MB), run=5017-5045msec 00:30:57.917 15:11:42 -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:57.917 15:11:42 -- target/dif.sh@43 -- # local sub 00:30:57.917 15:11:42 -- target/dif.sh@45 -- # for sub in "$@" 00:30:57.917 15:11:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:57.917 15:11:42 -- target/dif.sh@36 -- # local sub_id=0 00:30:57.917 15:11:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:30:57.917 15:11:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@109 -- # NULL_DIF=2 00:30:57.917 15:11:42 -- target/dif.sh@109 -- # bs=4k 00:30:57.917 15:11:42 -- target/dif.sh@109 -- # numjobs=8 00:30:57.917 15:11:42 -- target/dif.sh@109 -- # iodepth=16 00:30:57.917 15:11:42 -- target/dif.sh@109 -- # runtime= 00:30:57.917 15:11:42 -- target/dif.sh@109 -- # files=2 00:30:57.917 15:11:42 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:57.917 15:11:42 -- target/dif.sh@28 -- # local sub 00:30:57.917 15:11:42 -- target/dif.sh@30 -- # for sub in "$@" 00:30:57.917 15:11:42 -- target/dif.sh@31 -- # create_subsystem 0 00:30:57.917 15:11:42 -- target/dif.sh@18 -- # local sub_id=0 00:30:57.917 15:11:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 bdev_null0 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 [2024-04-26 15:11:42.761519] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@30 -- # for sub in "$@" 00:30:57.917 15:11:42 -- target/dif.sh@31 -- # create_subsystem 1 00:30:57.917 15:11:42 -- target/dif.sh@18 -- # local sub_id=1 00:30:57.917 15:11:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 bdev_null1 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:57.917 15:11:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@30 -- # for sub in "$@" 00:30:57.917 15:11:42 -- target/dif.sh@31 -- # create_subsystem 2 00:30:57.917 15:11:42 -- target/dif.sh@18 -- # local sub_id=2 00:30:57.917 15:11:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 bdev_null2 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:57.917 15:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.917 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:30:57.917 15:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.917 15:11:42 -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:57.917 15:11:42 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:57.917 15:11:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:57.917 15:11:42 -- nvmf/common.sh@521 -- # config=() 00:30:57.917 15:11:42 -- nvmf/common.sh@521 -- # local subsystem config 00:30:57.917 15:11:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:57.917 15:11:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.917 15:11:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:57.917 { 00:30:57.917 "params": { 00:30:57.917 "name": "Nvme$subsystem", 00:30:57.917 "trtype": "$TEST_TRANSPORT", 00:30:57.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.917 "adrfam": "ipv4", 00:30:57.917 "trsvcid": "$NVMF_PORT", 00:30:57.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.917 "hdgst": ${hdgst:-false}, 00:30:57.917 "ddgst": ${ddgst:-false} 00:30:57.917 }, 00:30:57.917 "method": "bdev_nvme_attach_controller" 00:30:57.917 } 00:30:57.917 EOF 00:30:57.917 )") 00:30:57.917 15:11:42 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.917 15:11:42 -- target/dif.sh@82 -- # gen_fio_conf 
00:30:57.917 15:11:42 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:57.917 15:11:42 -- target/dif.sh@54 -- # local file 00:30:57.917 15:11:42 -- target/dif.sh@56 -- # cat 00:30:57.917 15:11:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:57.917 15:11:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:57.917 15:11:42 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.917 15:11:42 -- common/autotest_common.sh@1327 -- # shift 00:30:57.917 15:11:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:57.917 15:11:42 -- nvmf/common.sh@543 -- # cat 00:30:57.917 15:11:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.917 15:11:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.917 15:11:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:57.917 15:11:42 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:57.917 15:11:42 -- target/dif.sh@72 -- # (( file <= files )) 00:30:57.917 15:11:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:57.917 15:11:42 -- target/dif.sh@73 -- # cat 00:30:57.917 15:11:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:57.917 15:11:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:57.917 { 00:30:57.917 "params": { 00:30:57.917 "name": "Nvme$subsystem", 00:30:57.917 "trtype": "$TEST_TRANSPORT", 00:30:57.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.917 "adrfam": "ipv4", 00:30:57.917 "trsvcid": "$NVMF_PORT", 00:30:57.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.917 "hdgst": ${hdgst:-false}, 00:30:57.917 "ddgst": ${ddgst:-false} 00:30:57.917 }, 00:30:57.917 "method": "bdev_nvme_attach_controller" 00:30:57.917 } 00:30:57.917 EOF 00:30:57.917 )") 00:30:57.917 15:11:42 -- nvmf/common.sh@543 -- # cat 00:30:57.917 15:11:42 -- target/dif.sh@72 -- # (( file++ )) 00:30:57.917 15:11:42 -- target/dif.sh@72 -- # (( file <= files )) 00:30:57.917 15:11:42 -- target/dif.sh@73 -- # cat 00:30:57.917 15:11:42 -- target/dif.sh@72 -- # (( file++ )) 00:30:57.917 15:11:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:57.917 15:11:42 -- target/dif.sh@72 -- # (( file <= files )) 00:30:57.917 15:11:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:57.917 { 00:30:57.917 "params": { 00:30:57.917 "name": "Nvme$subsystem", 00:30:57.917 "trtype": "$TEST_TRANSPORT", 00:30:57.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.917 "adrfam": "ipv4", 00:30:57.917 "trsvcid": "$NVMF_PORT", 00:30:57.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.917 "hdgst": ${hdgst:-false}, 00:30:57.918 "ddgst": ${ddgst:-false} 00:30:57.918 }, 00:30:57.918 "method": "bdev_nvme_attach_controller" 00:30:57.918 } 00:30:57.918 EOF 00:30:57.918 )") 00:30:57.918 15:11:42 -- nvmf/common.sh@543 -- # cat 00:30:57.918 15:11:42 -- nvmf/common.sh@545 -- # jq . 
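The heredoc-and-jq assembly above builds the config handed to fio on /dev/fd/62. The fragment printed below shows only the per-controller entries; the full document is, to the best reconstruction, shaped like this (a sketch assuming the usual SPDK "subsystems"/"bdev" wrapper that --spdk_json_conf expects):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

with one such config entry per subsystem (the Nvme1 and Nvme2 entries differ only in the index).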
00:30:57.918 15:11:42 -- nvmf/common.sh@546 -- # IFS=, 00:30:57.918 15:11:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:57.918 "params": { 00:30:57.918 "name": "Nvme0", 00:30:57.918 "trtype": "tcp", 00:30:57.918 "traddr": "10.0.0.2", 00:30:57.918 "adrfam": "ipv4", 00:30:57.918 "trsvcid": "4420", 00:30:57.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.918 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:57.918 "hdgst": false, 00:30:57.918 "ddgst": false 00:30:57.918 }, 00:30:57.918 "method": "bdev_nvme_attach_controller" 00:30:57.918 },{ 00:30:57.918 "params": { 00:30:57.918 "name": "Nvme1", 00:30:57.918 "trtype": "tcp", 00:30:57.918 "traddr": "10.0.0.2", 00:30:57.918 "adrfam": "ipv4", 00:30:57.918 "trsvcid": "4420", 00:30:57.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:57.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:57.918 "hdgst": false, 00:30:57.918 "ddgst": false 00:30:57.918 }, 00:30:57.918 "method": "bdev_nvme_attach_controller" 00:30:57.918 },{ 00:30:57.918 "params": { 00:30:57.918 "name": "Nvme2", 00:30:57.918 "trtype": "tcp", 00:30:57.918 "traddr": "10.0.0.2", 00:30:57.918 "adrfam": "ipv4", 00:30:57.918 "trsvcid": "4420", 00:30:57.918 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:57.918 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:57.918 "hdgst": false, 00:30:57.918 "ddgst": false 00:30:57.918 }, 00:30:57.918 "method": "bdev_nvme_attach_controller" 00:30:57.918 }' 00:30:57.918 15:11:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:57.918 15:11:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:57.918 15:11:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.918 15:11:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.918 15:11:42 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:57.918 15:11:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:57.918 15:11:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:57.918 15:11:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:57.918 15:11:42 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:57.918 15:11:42 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.918 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:57.918 ... 00:30:57.918 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:57.918 ... 00:30:57.918 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:57.918 ... 
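The job file on /dev/fd/61 is never echoed, but the parameters set earlier (bs=4k, numjobs=8, iodepth=16, files=2 giving three filename sections) and the per-job headers pin it down to roughly the following (a sketch; the bdev names are assumed from SPDK's NvmeXn1 naming for controllers attached as NvmeX):

    [global]
    thread=1            ; the spdk_bdev engine requires fio's thread mode
    ioengine=spdk_bdev
    rw=randread
    bs=4k
    numjobs=8
    iodepth=16

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

    [filename2]
    filename=Nvme2n1

Eight jobs per section times three sections matches the "Starting 24 threads" reported below.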
00:30:57.918 fio-3.35 00:30:57.918 Starting 24 threads 00:30:57.918 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.114 00:31:10.114 filename0: (groupid=0, jobs=1): err= 0: pid=3920336: Fri Apr 26 15:11:54 2024 00:31:10.114 read: IOPS=330, BW=1323KiB/s (1354kB/s)(13.0MiB/10064msec) 00:31:10.114 slat (nsec): min=8199, max=67615, avg=21556.62, stdev=9661.47 00:31:10.114 clat (msec): min=32, max=511, avg=48.18, stdev=58.03 00:31:10.114 lat (msec): min=33, max=511, avg=48.20, stdev=58.03 00:31:10.114 clat percentiles (msec): 00:31:10.114 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.114 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:31:10.114 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 44], 00:31:10.114 | 99.00th=[ 376], 99.50th=[ 405], 99.90th=[ 426], 99.95th=[ 510], 00:31:10.114 | 99.99th=[ 510] 00:31:10.114 bw ( KiB/s): min= 126, max= 1920, per=4.13%, avg=1324.85, stdev=699.08, samples=20 00:31:10.114 iops : min= 31, max= 480, avg=331.15, stdev=174.80, samples=20 00:31:10.114 lat (msec) : 50=95.67%, 100=0.48%, 250=0.96%, 500=2.82%, 750=0.06% 00:31:10.114 cpu : usr=97.69%, sys=1.59%, ctx=162, majf=0, minf=29 00:31:10.114 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:10.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.114 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.114 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.114 filename0: (groupid=0, jobs=1): err= 0: pid=3920337: Fri Apr 26 15:11:54 2024 00:31:10.114 read: IOPS=330, BW=1321KiB/s (1353kB/s)(13.0MiB/10074msec) 00:31:10.114 slat (usec): min=9, max=102, avg=40.06, stdev=13.01 00:31:10.114 clat (msec): min=32, max=413, avg=48.05, stdev=60.17 00:31:10.114 lat (msec): min=32, max=413, avg=48.09, stdev=60.17 00:31:10.114 clat percentiles (msec): 00:31:10.114 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.114 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.114 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 44], 00:31:10.114 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 414], 99.95th=[ 414], 00:31:10.114 | 99.99th=[ 414] 00:31:10.114 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1324.80, stdev=698.96, samples=20 00:31:10.114 iops : min= 32, max= 480, avg=331.20, stdev=174.74, samples=20 00:31:10.114 lat (msec) : 50=95.67%, 100=0.96%, 250=0.36%, 500=3.00% 00:31:10.114 cpu : usr=97.59%, sys=1.74%, ctx=67, majf=0, minf=27 00:31:10.114 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:10.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.115 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.115 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.115 filename0: (groupid=0, jobs=1): err= 0: pid=3920338: Fri Apr 26 15:11:54 2024 00:31:10.115 read: IOPS=333, BW=1336KiB/s (1368kB/s)(13.1MiB/10079msec) 00:31:10.115 slat (nsec): min=7945, max=94154, avg=38215.12, stdev=12120.53 00:31:10.115 clat (msec): min=25, max=386, avg=47.51, stdev=48.68 00:31:10.115 lat (msec): min=25, max=386, avg=47.55, stdev=48.68 00:31:10.115 clat percentiles (msec): 00:31:10.115 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.115 | 30.00th=[ 34], 40.00th=[ 
34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.115 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 70], 00:31:10.115 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 380], 99.95th=[ 388], 00:31:10.115 | 99.99th=[ 388] 00:31:10.115 bw ( KiB/s): min= 192, max= 1920, per=4.18%, avg=1340.15, stdev=674.76, samples=20 00:31:10.115 iops : min= 48, max= 480, avg=335.00, stdev=168.67, samples=20 00:31:10.115 lat (msec) : 50=94.59%, 100=0.48%, 250=2.44%, 500=2.50% 00:31:10.115 cpu : usr=97.81%, sys=1.60%, ctx=47, majf=0, minf=36 00:31:10.115 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.115 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.115 issued rwts: total=3366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.115 filename0: (groupid=0, jobs=1): err= 0: pid=3920339: Fri Apr 26 15:11:54 2024 00:31:10.115 read: IOPS=338, BW=1356KiB/s (1388kB/s)(13.4MiB/10103msec) 00:31:10.115 slat (usec): min=6, max=146, avg=26.72, stdev=20.78 00:31:10.115 clat (msec): min=10, max=289, avg=46.91, stdev=45.68 00:31:10.115 lat (msec): min=10, max=289, avg=46.94, stdev=45.68 00:31:10.115 clat percentiles (msec): 00:31:10.115 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.115 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.115 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 44], 95.00th=[ 176], 00:31:10.115 | 99.00th=[ 264], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 292], 00:31:10.115 | 99.99th=[ 292] 00:31:10.115 bw ( KiB/s): min= 256, max= 1920, per=4.26%, avg=1363.20, stdev=678.93, samples=20 00:31:10.115 iops : min= 64, max= 480, avg=340.80, stdev=169.73, samples=20 00:31:10.115 lat (msec) : 20=0.53%, 50=94.33%, 250=2.28%, 500=2.86% 00:31:10.115 cpu : usr=97.48%, sys=1.59%, ctx=123, majf=0, minf=24 00:31:10.115 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:10.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.115 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.115 issued rwts: total=3424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.115 filename0: (groupid=0, jobs=1): err= 0: pid=3920340: Fri Apr 26 15:11:54 2024 00:31:10.115 read: IOPS=334, BW=1339KiB/s (1371kB/s)(13.2MiB/10088msec) 00:31:10.115 slat (usec): min=8, max=122, avg=33.95, stdev=25.62 00:31:10.115 clat (msec): min=32, max=402, avg=47.51, stdev=50.43 00:31:10.115 lat (msec): min=32, max=402, avg=47.54, stdev=50.44 00:31:10.115 clat percentiles (msec): 00:31:10.115 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.115 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:31:10.115 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 44], 00:31:10.115 | 99.00th=[ 279], 99.50th=[ 351], 99.90th=[ 401], 99.95th=[ 401], 00:31:10.115 | 99.99th=[ 401] 00:31:10.115 bw ( KiB/s): min= 144, max= 1920, per=4.20%, avg=1344.00, stdev=682.88, samples=20 00:31:10.115 iops : min= 36, max= 480, avg=336.00, stdev=170.72, samples=20 00:31:10.115 lat (msec) : 50=95.26%, 250=2.25%, 500=2.49% 00:31:10.115 cpu : usr=98.28%, sys=1.30%, ctx=18, majf=0, minf=41 00:31:10.115 IO depths : 1=5.8%, 2=12.1%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:10.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:10.115 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.115 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.115 filename0: (groupid=0, jobs=1): err= 0: pid=3920341: Fri Apr 26 15:11:54 2024 00:31:10.115 read: IOPS=331, BW=1327KiB/s (1359kB/s)(13.1MiB/10077msec) 00:31:10.115 slat (usec): min=10, max=159, avg=52.63, stdev=23.17 00:31:10.115 clat (msec): min=32, max=505, avg=47.71, stdev=57.63 00:31:10.115 lat (msec): min=32, max=505, avg=47.76, stdev=57.63 00:31:10.115 clat percentiles (msec): 00:31:10.115 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.115 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.115 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 44], 00:31:10.115 | 99.00th=[ 363], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 506], 00:31:10.115 | 99.99th=[ 506] 00:31:10.115 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=1331.20, stdev=705.12, samples=20 00:31:10.115 iops : min= 32, max= 480, avg=332.80, stdev=176.28, samples=20 00:31:10.115 lat (msec) : 50=96.17%, 250=0.96%, 500=2.81%, 750=0.06% 00:31:10.115 cpu : usr=94.60%, sys=3.09%, ctx=240, majf=0, minf=24 00:31:10.115 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:10.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.115 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.115 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.115 filename0: (groupid=0, jobs=1): err= 0: pid=3920342: Fri Apr 26 15:11:54 2024 00:31:10.115 read: IOPS=335, BW=1340KiB/s (1373kB/s)(13.2MiB/10075msec) 00:31:10.115 slat (usec): min=8, max=135, avg=39.67, stdev=15.19 00:31:10.115 clat (msec): min=32, max=364, avg=47.10, stdev=47.05 00:31:10.115 lat (msec): min=32, max=364, avg=47.14, stdev=47.04 00:31:10.115 clat percentiles (msec): 00:31:10.115 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.115 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.115 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 45], 00:31:10.115 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 363], 00:31:10.115 | 99.99th=[ 363] 00:31:10.115 bw ( KiB/s): min= 240, max= 1920, per=4.21%, avg=1349.60, stdev=672.91, samples=20 00:31:10.115 iops : min= 60, max= 480, avg=337.40, stdev=168.23, samples=20 00:31:10.115 lat (msec) : 50=95.26%, 250=2.25%, 500=2.49% 00:31:10.115 cpu : usr=97.42%, sys=2.06%, ctx=61, majf=0, minf=34 00:31:10.116 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.116 filename0: (groupid=0, jobs=1): err= 0: pid=3920343: Fri Apr 26 15:11:54 2024 00:31:10.116 read: IOPS=335, BW=1340KiB/s (1373kB/s)(13.2MiB/10075msec) 00:31:10.116 slat (usec): min=8, max=105, avg=32.90, stdev=13.89 00:31:10.116 clat (msec): min=26, max=359, avg=47.17, stdev=47.02 00:31:10.116 lat (msec): min=26, max=359, avg=47.20, stdev=47.02 00:31:10.116 clat percentiles (msec): 00:31:10.116 | 1.00th=[ 33], 
5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.116 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.116 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 44], 95.00th=[ 45], 00:31:10.116 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 359], 00:31:10.116 | 99.99th=[ 359] 00:31:10.116 bw ( KiB/s): min= 240, max= 1920, per=4.21%, avg=1349.60, stdev=672.91, samples=20 00:31:10.116 iops : min= 60, max= 480, avg=337.40, stdev=168.23, samples=20 00:31:10.116 lat (msec) : 50=95.20%, 100=0.06%, 250=2.25%, 500=2.49% 00:31:10.116 cpu : usr=97.50%, sys=1.89%, ctx=35, majf=0, minf=31 00:31:10.116 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.116 filename1: (groupid=0, jobs=1): err= 0: pid=3920344: Fri Apr 26 15:11:54 2024 00:31:10.116 read: IOPS=338, BW=1356KiB/s (1388kB/s)(13.4MiB/10103msec) 00:31:10.116 slat (usec): min=6, max=142, avg=27.74, stdev=18.79 00:31:10.116 clat (msec): min=10, max=300, avg=46.92, stdev=45.70 00:31:10.116 lat (msec): min=10, max=300, avg=46.94, stdev=45.70 00:31:10.116 clat percentiles (msec): 00:31:10.116 | 1.00th=[ 26], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.116 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.116 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 44], 95.00th=[ 174], 00:31:10.116 | 99.00th=[ 264], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 300], 00:31:10.116 | 99.99th=[ 300] 00:31:10.116 bw ( KiB/s): min= 256, max= 1920, per=4.26%, avg=1363.20, stdev=677.66, samples=20 00:31:10.116 iops : min= 64, max= 480, avg=340.80, stdev=169.42, samples=20 00:31:10.116 lat (msec) : 20=0.53%, 50=94.33%, 250=2.28%, 500=2.86% 00:31:10.116 cpu : usr=96.47%, sys=2.14%, ctx=147, majf=0, minf=34 00:31:10.116 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:10.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 issued rwts: total=3424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.116 filename1: (groupid=0, jobs=1): err= 0: pid=3920345: Fri Apr 26 15:11:54 2024 00:31:10.116 read: IOPS=334, BW=1339KiB/s (1371kB/s)(13.2MiB/10078msec) 00:31:10.116 slat (nsec): min=8195, max=68688, avg=19478.53, stdev=8136.73 00:31:10.116 clat (msec): min=33, max=365, avg=47.59, stdev=47.98 00:31:10.116 lat (msec): min=33, max=365, avg=47.61, stdev=47.98 00:31:10.116 clat percentiles (msec): 00:31:10.116 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.116 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:31:10.116 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 44], 95.00th=[ 86], 00:31:10.116 | 99.00th=[ 268], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 368], 00:31:10.116 | 99.99th=[ 368] 00:31:10.116 bw ( KiB/s): min= 240, max= 1920, per=4.19%, avg=1343.05, stdev=666.75, samples=20 00:31:10.116 iops : min= 60, max= 480, avg=335.75, stdev=166.71, samples=20 00:31:10.116 lat (msec) : 50=94.37%, 100=0.89%, 250=2.31%, 500=2.43% 00:31:10.116 cpu : usr=94.52%, sys=3.25%, ctx=399, majf=0, minf=28 00:31:10.116 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 
8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 issued rwts: total=3374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.116 filename1: (groupid=0, jobs=1): err= 0: pid=3920346: Fri Apr 26 15:11:54 2024 00:31:10.116 read: IOPS=336, BW=1345KiB/s (1377kB/s)(13.2MiB/10087msec) 00:31:10.116 slat (usec): min=8, max=136, avg=36.61, stdev=20.08 00:31:10.116 clat (msec): min=32, max=330, avg=47.17, stdev=46.15 00:31:10.116 lat (msec): min=32, max=330, avg=47.20, stdev=46.15 00:31:10.116 clat percentiles (msec): 00:31:10.116 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.116 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.116 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 120], 00:31:10.116 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 321], 99.95th=[ 330], 00:31:10.116 | 99.99th=[ 330] 00:31:10.116 bw ( KiB/s): min= 256, max= 1920, per=4.21%, avg=1350.40, stdev=671.53, samples=20 00:31:10.116 iops : min= 64, max= 480, avg=337.60, stdev=167.88, samples=20 00:31:10.116 lat (msec) : 50=94.81%, 250=2.30%, 500=2.89% 00:31:10.116 cpu : usr=97.76%, sys=1.62%, ctx=31, majf=0, minf=52 00:31:10.116 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.116 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.116 filename1: (groupid=0, jobs=1): err= 0: pid=3920347: Fri Apr 26 15:11:54 2024 00:31:10.116 read: IOPS=330, BW=1321KiB/s (1353kB/s)(13.0MiB/10074msec) 00:31:10.116 slat (usec): min=8, max=116, avg=40.83, stdev=13.17 00:31:10.116 clat (msec): min=32, max=413, avg=48.05, stdev=60.18 00:31:10.116 lat (msec): min=32, max=413, avg=48.09, stdev=60.18 00:31:10.116 clat percentiles (msec): 00:31:10.116 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.116 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.116 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 44], 00:31:10.116 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 414], 99.95th=[ 414], 00:31:10.116 | 99.99th=[ 414] 00:31:10.116 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1324.80, stdev=698.96, samples=20 00:31:10.116 iops : min= 32, max= 480, avg=331.20, stdev=174.74, samples=20 00:31:10.117 lat (msec) : 50=95.67%, 100=0.96%, 250=0.42%, 500=2.94% 00:31:10.117 cpu : usr=97.52%, sys=1.74%, ctx=83, majf=0, minf=30 00:31:10.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:10.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.117 filename1: (groupid=0, jobs=1): err= 0: pid=3920348: Fri Apr 26 15:11:54 2024 00:31:10.117 read: IOPS=337, BW=1350KiB/s (1383kB/s)(13.2MiB/10001msec) 00:31:10.117 slat (usec): min=8, max=110, avg=36.85, stdev=15.15 00:31:10.117 clat (msec): min=25, max=277, avg=47.10, stdev=46.76 00:31:10.117 lat (msec): min=25, 
max=277, avg=47.13, stdev=46.76 00:31:10.117 clat percentiles (msec): 00:31:10.117 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.117 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.117 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 45], 00:31:10.117 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 279], 00:31:10.117 | 99.99th=[ 279] 00:31:10.117 bw ( KiB/s): min= 256, max= 1920, per=4.40%, avg=1408.00, stdev=637.15, samples=19 00:31:10.117 iops : min= 64, max= 480, avg=352.00, stdev=159.29, samples=19 00:31:10.117 lat (msec) : 50=95.26%, 250=2.37%, 500=2.37% 00:31:10.117 cpu : usr=94.82%, sys=2.93%, ctx=334, majf=0, minf=34 00:31:10.117 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:10.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.117 filename1: (groupid=0, jobs=1): err= 0: pid=3920349: Fri Apr 26 15:11:54 2024 00:31:10.117 read: IOPS=333, BW=1336KiB/s (1368kB/s)(13.1MiB/10078msec) 00:31:10.117 slat (usec): min=8, max=135, avg=31.84, stdev=18.41 00:31:10.117 clat (msec): min=32, max=374, avg=47.56, stdev=48.15 00:31:10.117 lat (msec): min=32, max=374, avg=47.59, stdev=48.15 00:31:10.117 clat percentiles (msec): 00:31:10.117 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.117 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.117 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 68], 00:31:10.117 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 363], 99.95th=[ 376], 00:31:10.117 | 99.99th=[ 376] 00:31:10.117 bw ( KiB/s): min= 192, max= 1920, per=4.18%, avg=1340.00, stdev=674.68, samples=20 00:31:10.117 iops : min= 48, max= 480, avg=335.00, stdev=168.67, samples=20 00:31:10.117 lat (msec) : 50=94.59%, 100=0.48%, 250=2.67%, 500=2.26% 00:31:10.117 cpu : usr=97.32%, sys=1.61%, ctx=44, majf=0, minf=26 00:31:10.117 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 issued rwts: total=3366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.117 filename1: (groupid=0, jobs=1): err= 0: pid=3920350: Fri Apr 26 15:11:54 2024 00:31:10.117 read: IOPS=329, BW=1320KiB/s (1352kB/s)(13.0MiB/10074msec) 00:31:10.117 slat (usec): min=9, max=156, avg=42.21, stdev=18.54 00:31:10.117 clat (msec): min=22, max=510, avg=48.07, stdev=60.50 00:31:10.117 lat (msec): min=22, max=510, avg=48.12, stdev=60.50 00:31:10.117 clat percentiles (msec): 00:31:10.117 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.117 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.117 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 45], 00:31:10.117 | 99.00th=[ 397], 99.50th=[ 414], 99.90th=[ 502], 99.95th=[ 510], 00:31:10.117 | 99.99th=[ 510] 00:31:10.117 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1323.35, stdev=698.11, samples=20 00:31:10.117 iops : min= 32, max= 480, avg=330.80, stdev=174.51, samples=20 00:31:10.117 lat (msec) : 50=95.43%, 100=1.20%, 250=0.30%, 500=2.95%, 750=0.12% 00:31:10.117 cpu : usr=98.26%, 
sys=1.23%, ctx=59, majf=0, minf=37 00:31:10.117 IO depths : 1=5.7%, 2=11.8%, 4=24.3%, 8=51.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:10.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 issued rwts: total=3324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.117 filename1: (groupid=0, jobs=1): err= 0: pid=3920351: Fri Apr 26 15:11:54 2024 00:31:10.117 read: IOPS=334, BW=1339KiB/s (1371kB/s)(13.2MiB/10075msec) 00:31:10.117 slat (usec): min=8, max=128, avg=44.30, stdev=17.19 00:31:10.117 clat (msec): min=23, max=368, avg=47.30, stdev=49.23 00:31:10.117 lat (msec): min=23, max=368, avg=47.34, stdev=49.22 00:31:10.117 clat percentiles (msec): 00:31:10.117 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.117 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.117 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 45], 00:31:10.117 | 99.00th=[ 279], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 368], 00:31:10.117 | 99.99th=[ 368] 00:31:10.117 bw ( KiB/s): min= 192, max= 1920, per=4.20%, avg=1344.80, stdev=681.25, samples=20 00:31:10.117 iops : min= 48, max= 480, avg=336.20, stdev=170.31, samples=20 00:31:10.117 lat (msec) : 50=95.31%, 100=0.06%, 250=1.96%, 500=2.67% 00:31:10.117 cpu : usr=96.68%, sys=2.18%, ctx=67, majf=0, minf=29 00:31:10.117 IO depths : 1=5.9%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:10.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.117 issued rwts: total=3372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.117 filename2: (groupid=0, jobs=1): err= 0: pid=3920352: Fri Apr 26 15:11:54 2024 00:31:10.117 read: IOPS=335, BW=1340KiB/s (1373kB/s)(13.2MiB/10075msec) 00:31:10.117 slat (usec): min=8, max=136, avg=47.66, stdev=21.37 00:31:10.117 clat (msec): min=31, max=369, avg=47.03, stdev=47.08 00:31:10.117 lat (msec): min=31, max=369, avg=47.08, stdev=47.07 00:31:10.117 clat percentiles (msec): 00:31:10.117 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.117 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.118 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 45], 00:31:10.118 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 372], 00:31:10.118 | 99.99th=[ 372] 00:31:10.118 bw ( KiB/s): min= 240, max= 1920, per=4.21%, avg=1349.60, stdev=672.91, samples=20 00:31:10.118 iops : min= 60, max= 480, avg=337.40, stdev=168.23, samples=20 00:31:10.118 lat (msec) : 50=95.26%, 250=2.31%, 500=2.43% 00:31:10.118 cpu : usr=97.16%, sys=1.83%, ctx=57, majf=0, minf=28 00:31:10.118 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:10.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.118 filename2: (groupid=0, jobs=1): err= 0: pid=3920353: Fri Apr 26 15:11:54 2024 00:31:10.118 read: IOPS=336, BW=1345KiB/s (1377kB/s)(13.2MiB/10087msec) 00:31:10.118 slat (usec): min=8, max=116, avg=31.22, stdev=15.53 00:31:10.118 clat 
(msec): min=25, max=338, avg=47.25, stdev=46.27 00:31:10.118 lat (msec): min=25, max=338, avg=47.28, stdev=46.27 00:31:10.118 clat percentiles (msec): 00:31:10.118 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.118 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.118 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 44], 95.00th=[ 117], 00:31:10.118 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 334], 99.95th=[ 338], 00:31:10.118 | 99.99th=[ 338] 00:31:10.118 bw ( KiB/s): min= 256, max= 1920, per=4.21%, avg=1350.40, stdev=671.53, samples=20 00:31:10.118 iops : min= 64, max= 480, avg=337.60, stdev=167.88, samples=20 00:31:10.118 lat (msec) : 50=94.81%, 250=2.42%, 500=2.77% 00:31:10.118 cpu : usr=97.76%, sys=1.59%, ctx=56, majf=0, minf=40 00:31:10.118 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.118 filename2: (groupid=0, jobs=1): err= 0: pid=3920354: Fri Apr 26 15:11:54 2024 00:31:10.118 read: IOPS=330, BW=1322KiB/s (1353kB/s)(13.0MiB/10072msec) 00:31:10.118 slat (usec): min=8, max=120, avg=43.74, stdev=17.34 00:31:10.118 clat (msec): min=32, max=413, avg=48.00, stdev=58.64 00:31:10.118 lat (msec): min=32, max=413, avg=48.04, stdev=58.63 00:31:10.118 clat percentiles (msec): 00:31:10.118 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.118 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.118 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 44], 00:31:10.118 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 414], 00:31:10.118 | 99.99th=[ 414] 00:31:10.118 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1324.95, stdev=699.04, samples=20 00:31:10.118 iops : min= 32, max= 480, avg=331.20, stdev=174.74, samples=20 00:31:10.118 lat (msec) : 50=95.67%, 100=0.48%, 250=0.96%, 500=2.88% 00:31:10.118 cpu : usr=96.22%, sys=2.34%, ctx=136, majf=0, minf=36 00:31:10.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:10.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.118 filename2: (groupid=0, jobs=1): err= 0: pid=3920355: Fri Apr 26 15:11:54 2024 00:31:10.118 read: IOPS=333, BW=1333KiB/s (1365kB/s)(13.1MiB/10087msec) 00:31:10.118 slat (nsec): min=5327, max=92114, avg=29990.89, stdev=12188.69 00:31:10.118 clat (msec): min=32, max=383, avg=47.60, stdev=49.17 00:31:10.118 lat (msec): min=32, max=383, avg=47.63, stdev=49.16 00:31:10.118 clat percentiles (msec): 00:31:10.118 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.118 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:31:10.118 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 71], 00:31:10.118 | 99.00th=[ 275], 99.50th=[ 330], 99.90th=[ 384], 99.95th=[ 384], 00:31:10.118 | 99.99th=[ 384] 00:31:10.118 bw ( KiB/s): min= 176, max= 1920, per=4.18%, avg=1338.40, stdev=677.50, samples=20 00:31:10.118 iops : min= 44, max= 480, avg=334.60, stdev=169.37, samples=20 00:31:10.118 lat 
(msec) : 50=94.71%, 100=0.65%, 250=1.73%, 500=2.91% 00:31:10.118 cpu : usr=98.06%, sys=1.52%, ctx=31, majf=0, minf=36 00:31:10.118 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 issued rwts: total=3362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.118 filename2: (groupid=0, jobs=1): err= 0: pid=3920356: Fri Apr 26 15:11:54 2024 00:31:10.118 read: IOPS=339, BW=1357KiB/s (1389kB/s)(13.4MiB/10090msec) 00:31:10.118 slat (nsec): min=6678, max=65054, avg=11643.60, stdev=5761.66 00:31:10.118 clat (msec): min=10, max=380, avg=47.01, stdev=48.98 00:31:10.118 lat (msec): min=10, max=380, avg=47.02, stdev=48.98 00:31:10.118 clat percentiles (msec): 00:31:10.118 | 1.00th=[ 19], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.118 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:31:10.118 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 44], 95.00th=[ 45], 00:31:10.118 | 99.00th=[ 266], 99.50th=[ 342], 99.90th=[ 380], 99.95th=[ 380], 00:31:10.118 | 99.99th=[ 380] 00:31:10.118 bw ( KiB/s): min= 128, max= 1920, per=4.26%, avg=1363.20, stdev=694.01, samples=20 00:31:10.118 iops : min= 32, max= 480, avg=340.80, stdev=173.50, samples=20 00:31:10.118 lat (msec) : 20=1.20%, 50=94.18%, 250=1.29%, 500=3.33% 00:31:10.118 cpu : usr=97.94%, sys=1.66%, ctx=20, majf=0, minf=71 00:31:10.118 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.118 issued rwts: total=3422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.118 filename2: (groupid=0, jobs=1): err= 0: pid=3920357: Fri Apr 26 15:11:54 2024 00:31:10.118 read: IOPS=335, BW=1340KiB/s (1372kB/s)(13.2MiB/10076msec) 00:31:10.118 slat (usec): min=8, max=146, avg=42.15, stdev=16.84 00:31:10.118 clat (msec): min=32, max=313, avg=47.07, stdev=46.98 00:31:10.118 lat (msec): min=32, max=313, avg=47.11, stdev=46.97 00:31:10.118 clat percentiles (msec): 00:31:10.118 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.118 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.118 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 45], 00:31:10.118 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 313], 00:31:10.119 | 99.99th=[ 313] 00:31:10.119 bw ( KiB/s): min= 240, max= 1920, per=4.21%, avg=1349.60, stdev=672.91, samples=20 00:31:10.119 iops : min= 60, max= 480, avg=337.40, stdev=168.23, samples=20 00:31:10.119 lat (msec) : 50=95.20%, 100=0.06%, 250=2.19%, 500=2.55% 00:31:10.119 cpu : usr=98.09%, sys=1.33%, ctx=60, majf=0, minf=25 00:31:10.119 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:10.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.119 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.119 filename2: (groupid=0, jobs=1): err= 0: pid=3920358: Fri Apr 26 15:11:54 2024 00:31:10.119 read: IOPS=336, BW=1345KiB/s 
(1377kB/s)(13.2MiB/10087msec) 00:31:10.119 slat (usec): min=8, max=115, avg=28.05, stdev=17.06 00:31:10.119 clat (msec): min=24, max=323, avg=47.29, stdev=45.81 00:31:10.119 lat (msec): min=24, max=323, avg=47.32, stdev=45.81 00:31:10.119 clat percentiles (msec): 00:31:10.119 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.119 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.119 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 44], 95.00th=[ 176], 00:31:10.119 | 99.00th=[ 264], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 326], 00:31:10.119 | 99.99th=[ 326] 00:31:10.119 bw ( KiB/s): min= 256, max= 1920, per=4.21%, avg=1350.40, stdev=671.53, samples=20 00:31:10.119 iops : min= 64, max= 480, avg=337.60, stdev=167.88, samples=20 00:31:10.119 lat (msec) : 50=94.81%, 250=2.30%, 500=2.89% 00:31:10.119 cpu : usr=97.53%, sys=1.87%, ctx=30, majf=0, minf=29 00:31:10.119 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:10.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.119 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.119 filename2: (groupid=0, jobs=1): err= 0: pid=3920359: Fri Apr 26 15:11:54 2024 00:31:10.119 read: IOPS=334, BW=1339KiB/s (1371kB/s)(13.2MiB/10084msec) 00:31:10.119 slat (usec): min=4, max=148, avg=43.45, stdev=19.23 00:31:10.119 clat (msec): min=32, max=278, avg=47.37, stdev=46.91 00:31:10.119 lat (msec): min=32, max=278, avg=47.41, stdev=46.91 00:31:10.119 clat percentiles (msec): 00:31:10.119 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:31:10.119 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:31:10.119 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 44], 95.00th=[ 86], 00:31:10.119 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 279], 00:31:10.119 | 99.99th=[ 279] 00:31:10.119 bw ( KiB/s): min= 256, max= 1920, per=4.20%, avg=1344.00, stdev=667.70, samples=20 00:31:10.119 iops : min= 64, max= 480, avg=336.00, stdev=166.92, samples=20 00:31:10.119 lat (msec) : 50=94.31%, 100=0.95%, 250=2.37%, 500=2.37% 00:31:10.119 cpu : usr=94.53%, sys=2.91%, ctx=531, majf=0, minf=32 00:31:10.119 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:10.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.119 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:10.119 00:31:10.119 Run status group 0 (all jobs): 00:31:10.119 READ: bw=31.3MiB/s (32.8MB/s), 1320KiB/s-1357KiB/s (1352kB/s-1389kB/s), io=316MiB (331MB), run=10001-10103msec 00:31:10.119 15:11:54 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:10.119 15:11:54 -- target/dif.sh@43 -- # local sub 00:31:10.119 15:11:54 -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.119 15:11:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:10.119 15:11:54 -- target/dif.sh@36 -- # local sub_id=0 00:31:10.119 15:11:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.119 15:11:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:10.119 15:11:54 -- target/dif.sh@36 -- # local sub_id=1 00:31:10.119 15:11:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.119 15:11:54 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:10.119 15:11:54 -- target/dif.sh@36 -- # local sub_id=2 00:31:10.119 15:11:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:10.119 15:11:54 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:10.119 15:11:54 -- target/dif.sh@115 -- # numjobs=2 00:31:10.119 15:11:54 -- target/dif.sh@115 -- # iodepth=8 00:31:10.119 15:11:54 -- target/dif.sh@115 -- # runtime=5 00:31:10.119 15:11:54 -- target/dif.sh@115 -- # files=1 00:31:10.119 15:11:54 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:10.119 15:11:54 -- target/dif.sh@28 -- # local sub 00:31:10.119 15:11:54 -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.119 15:11:54 -- target/dif.sh@31 -- # create_subsystem 0 00:31:10.119 15:11:54 -- target/dif.sh@18 -- # local sub_id=0 00:31:10.119 15:11:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 bdev_null0 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set 
+x 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 [2024-04-26 15:11:54.509389] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.119 15:11:54 -- target/dif.sh@31 -- # create_subsystem 1 00:31:10.119 15:11:54 -- target/dif.sh@18 -- # local sub_id=1 00:31:10.119 15:11:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.119 bdev_null1 00:31:10.119 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.119 15:11:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:10.119 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.119 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.120 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.120 15:11:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:10.120 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.120 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.120 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.120 15:11:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.120 15:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.120 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:10.120 15:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.120 15:11:54 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:10.120 15:11:54 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:10.120 15:11:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:10.120 15:11:54 -- nvmf/common.sh@521 -- # config=() 00:31:10.120 15:11:54 -- nvmf/common.sh@521 -- # local subsystem config 00:31:10.120 15:11:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:10.120 15:11:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:10.120 { 00:31:10.120 "params": { 00:31:10.120 "name": "Nvme$subsystem", 00:31:10.120 "trtype": "$TEST_TRANSPORT", 00:31:10.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.120 "adrfam": "ipv4", 00:31:10.120 "trsvcid": "$NVMF_PORT", 00:31:10.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.120 "hdgst": ${hdgst:-false}, 00:31:10.120 "ddgst": ${ddgst:-false} 00:31:10.120 }, 00:31:10.120 "method": "bdev_nvme_attach_controller" 00:31:10.120 } 00:31:10.120 EOF 00:31:10.120 )") 00:31:10.120 15:11:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.120 15:11:54 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.120 15:11:54 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:10.120 15:11:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.120 15:11:54 -- target/dif.sh@82 -- # gen_fio_conf 00:31:10.120 15:11:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:10.120 15:11:54 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.120 15:11:54 -- target/dif.sh@54 -- # local file 00:31:10.120 15:11:54 -- common/autotest_common.sh@1327 -- # shift 00:31:10.120 15:11:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:10.120 15:11:54 -- target/dif.sh@56 -- # cat 00:31:10.120 15:11:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.120 15:11:54 -- nvmf/common.sh@543 -- # cat 00:31:10.120 15:11:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.120 15:11:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:10.120 15:11:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:10.120 15:11:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:10.120 15:11:54 -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.120 15:11:54 -- target/dif.sh@73 -- # cat 00:31:10.120 15:11:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:10.120 15:11:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:10.120 { 00:31:10.120 "params": { 00:31:10.120 "name": "Nvme$subsystem", 00:31:10.120 "trtype": "$TEST_TRANSPORT", 00:31:10.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.120 "adrfam": "ipv4", 00:31:10.120 "trsvcid": "$NVMF_PORT", 00:31:10.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.120 "hdgst": ${hdgst:-false}, 00:31:10.120 "ddgst": ${ddgst:-false} 00:31:10.120 }, 00:31:10.120 "method": "bdev_nvme_attach_controller" 00:31:10.120 } 00:31:10.120 EOF 00:31:10.120 )") 00:31:10.120 15:11:54 -- nvmf/common.sh@543 -- # cat 00:31:10.120 15:11:54 -- target/dif.sh@72 -- # (( file++ )) 00:31:10.120 15:11:54 -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.120 15:11:54 -- nvmf/common.sh@545 -- # jq . 
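Worth noting how fio receives both files: the "fio /dev/fd/62" call traced above relies on bash process substitution, so neither config ever touches disk. Reconstructed from the fd numbers in the trace (a sketch, not the literal dif.sh source):

    # fio() wraps the SPDK fio plugin; "$@" carries the JSON config pipe.
    fio() {
        fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf)
    }
    # The caller's <(...) expands to /dev/fd/62 and gen_fio_conf's pipe
    # to /dev/fd/61, which is exactly the pair seen on the command line:
    fio <(create_json_sub_conf 0 1)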
00:31:10.120 15:11:54 -- nvmf/common.sh@546 -- # IFS=, 00:31:10.120 15:11:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:10.120 "params": { 00:31:10.120 "name": "Nvme0", 00:31:10.120 "trtype": "tcp", 00:31:10.120 "traddr": "10.0.0.2", 00:31:10.120 "adrfam": "ipv4", 00:31:10.120 "trsvcid": "4420", 00:31:10.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.120 "hdgst": false, 00:31:10.120 "ddgst": false 00:31:10.120 }, 00:31:10.120 "method": "bdev_nvme_attach_controller" 00:31:10.120 },{ 00:31:10.120 "params": { 00:31:10.120 "name": "Nvme1", 00:31:10.120 "trtype": "tcp", 00:31:10.120 "traddr": "10.0.0.2", 00:31:10.120 "adrfam": "ipv4", 00:31:10.120 "trsvcid": "4420", 00:31:10.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:10.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:10.120 "hdgst": false, 00:31:10.120 "ddgst": false 00:31:10.120 }, 00:31:10.120 "method": "bdev_nvme_attach_controller" 00:31:10.120 }' 00:31:10.120 15:11:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:10.120 15:11:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:10.120 15:11:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.120 15:11:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.120 15:11:54 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:10.120 15:11:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:10.120 15:11:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:10.120 15:11:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:10.120 15:11:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:10.120 15:11:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.120 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:10.120 ... 00:31:10.120 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:10.120 ... 
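One detail in the job headers just above: fio treats a comma-separated bs as per-data-direction sizes (read,write,trim), so bs=8k,16k,128k produces the (R) 8192B / (W) 16.0KiB / (T) 128KiB line even though these randread jobs only ever issue the 8k reads. As a jobfile fragment:

    rw=randread
    bs=8k,16k,128k   ; block size for read,write,trim respectively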
00:31:10.120 fio-3.35 00:31:10.120 Starting 4 threads 00:31:10.120 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.383 00:31:15.383 filename0: (groupid=0, jobs=1): err= 0: pid=3921737: Fri Apr 26 15:12:00 2024 00:31:15.383 read: IOPS=1898, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5003msec) 00:31:15.383 slat (nsec): min=5089, max=62434, avg=22647.98, stdev=11193.02 00:31:15.383 clat (usec): min=1147, max=7986, avg=4133.03, stdev=412.02 00:31:15.383 lat (usec): min=1164, max=8019, avg=4155.68, stdev=411.55 00:31:15.383 clat percentiles (usec): 00:31:15.383 | 1.00th=[ 2802], 5.00th=[ 3687], 10.00th=[ 3851], 20.00th=[ 3949], 00:31:15.383 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4146], 00:31:15.383 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4621], 00:31:15.383 | 99.00th=[ 5538], 99.50th=[ 6128], 99.90th=[ 7439], 99.95th=[ 7832], 00:31:15.383 | 99.99th=[ 7963] 00:31:15.383 bw ( KiB/s): min=14208, max=16144, per=25.00%, avg=15185.60, stdev=471.72, samples=10 00:31:15.383 iops : min= 1776, max= 2018, avg=1898.20, stdev=58.96, samples=10 00:31:15.383 lat (msec) : 2=0.34%, 4=27.93%, 10=71.73% 00:31:15.383 cpu : usr=95.30%, sys=4.12%, ctx=11, majf=0, minf=70 00:31:15.383 IO depths : 1=0.3%, 2=17.3%, 4=55.9%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.383 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.383 issued rwts: total=9499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.383 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:15.383 filename0: (groupid=0, jobs=1): err= 0: pid=3921738: Fri Apr 26 15:12:00 2024 00:31:15.383 read: IOPS=1900, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5001msec) 00:31:15.383 slat (nsec): min=6445, max=65456, avg=21114.77, stdev=10234.40 00:31:15.383 clat (usec): min=980, max=8165, avg=4132.95, stdev=453.70 00:31:15.383 lat (usec): min=992, max=8180, avg=4154.07, stdev=453.45 00:31:15.383 clat percentiles (usec): 00:31:15.383 | 1.00th=[ 2835], 5.00th=[ 3621], 10.00th=[ 3818], 20.00th=[ 3949], 00:31:15.383 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4146], 00:31:15.383 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4621], 00:31:15.383 | 99.00th=[ 5800], 99.50th=[ 6521], 99.90th=[ 7767], 99.95th=[ 7832], 00:31:15.383 | 99.99th=[ 8160] 00:31:15.383 bw ( KiB/s): min=14240, max=15760, per=25.08%, avg=15232.00, stdev=433.11, samples=9 00:31:15.383 iops : min= 1780, max= 1970, avg=1904.00, stdev=54.14, samples=9 00:31:15.383 lat (usec) : 1000=0.03% 00:31:15.383 lat (msec) : 2=0.38%, 4=26.19%, 10=73.40% 00:31:15.383 cpu : usr=94.44%, sys=4.88%, ctx=27, majf=0, minf=48 00:31:15.383 IO depths : 1=0.2%, 2=19.1%, 4=54.8%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.383 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.383 issued rwts: total=9504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.383 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:15.383 filename1: (groupid=0, jobs=1): err= 0: pid=3921739: Fri Apr 26 15:12:00 2024 00:31:15.383 read: IOPS=1886, BW=14.7MiB/s (15.5MB/s)(73.7MiB/5001msec) 00:31:15.383 slat (nsec): min=6530, max=62409, avg=22582.15, stdev=11358.85 00:31:15.383 clat (usec): min=820, max=8164, avg=4154.11, stdev=469.27 00:31:15.383 lat (usec): min=847, max=8178, avg=4176.70, stdev=468.52 00:31:15.383 clat percentiles (usec): 
00:31:15.383 | 1.00th=[ 2933], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 3949], 00:31:15.383 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4146], 00:31:15.383 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:31:15.383 | 99.00th=[ 6194], 99.50th=[ 6980], 99.90th=[ 7767], 99.95th=[ 8029], 00:31:15.383 | 99.99th=[ 8160] 00:31:15.383 bw ( KiB/s): min=14000, max=15519, per=24.88%, avg=15114.56, stdev=436.60, samples=9 00:31:15.383 iops : min= 1750, max= 1939, avg=1889.22, stdev=54.47, samples=9 00:31:15.383 lat (usec) : 1000=0.05% 00:31:15.383 lat (msec) : 2=0.38%, 4=27.18%, 10=72.38% 00:31:15.383 cpu : usr=94.80%, sys=4.54%, ctx=58, majf=0, minf=36 00:31:15.383 IO depths : 1=0.4%, 2=15.8%, 4=57.9%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.383 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.383 issued rwts: total=9436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.383 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:15.383 filename1: (groupid=0, jobs=1): err= 0: pid=3921740: Fri Apr 26 15:12:00 2024 00:31:15.383 read: IOPS=1908, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5002msec) 00:31:15.383 slat (usec): min=4, max=224, avg=18.51, stdev=11.10 00:31:15.383 clat (usec): min=834, max=7934, avg=4132.05, stdev=415.52 00:31:15.383 lat (usec): min=852, max=7947, avg=4150.55, stdev=415.35 00:31:15.383 clat percentiles (usec): 00:31:15.383 | 1.00th=[ 2900], 5.00th=[ 3556], 10.00th=[ 3818], 20.00th=[ 3982], 00:31:15.383 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4178], 00:31:15.383 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4555], 00:31:15.383 | 99.00th=[ 5604], 99.50th=[ 6128], 99.90th=[ 7111], 99.95th=[ 7308], 00:31:15.383 | 99.99th=[ 7963] 00:31:15.383 bw ( KiB/s): min=14336, max=16096, per=25.13%, avg=15262.20, stdev=452.10, samples=10 00:31:15.383 iops : min= 1792, max= 2012, avg=1907.70, stdev=56.48, samples=10 00:31:15.383 lat (usec) : 1000=0.02% 00:31:15.383 lat (msec) : 2=0.17%, 4=22.96%, 10=76.85% 00:31:15.383 cpu : usr=91.60%, sys=6.22%, ctx=57, majf=0, minf=48 00:31:15.383 IO depths : 1=0.3%, 2=11.6%, 4=62.2%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.383 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.383 issued rwts: total=9545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.383 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:15.383 00:31:15.383 Run status group 0 (all jobs): 00:31:15.383 READ: bw=59.3MiB/s (62.2MB/s), 14.7MiB/s-14.9MiB/s (15.5MB/s-15.6MB/s), io=297MiB (311MB), run=5001-5003msec 00:31:15.383 15:12:00 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:15.383 15:12:00 -- target/dif.sh@43 -- # local sub 00:31:15.383 15:12:00 -- target/dif.sh@45 -- # for sub in "$@" 00:31:15.384 15:12:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:15.384 15:12:00 -- target/dif.sh@36 -- # local sub_id=0 00:31:15.384 15:12:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:15.384 15:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 15:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.384 15:12:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:15.384 15:12:00 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 15:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.384 15:12:00 -- target/dif.sh@45 -- # for sub in "$@" 00:31:15.384 15:12:00 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:15.384 15:12:00 -- target/dif.sh@36 -- # local sub_id=1 00:31:15.384 15:12:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.384 15:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 15:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.384 15:12:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:15.384 15:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 15:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.384 00:31:15.384 real 0m24.277s 00:31:15.384 user 4m31.868s 00:31:15.384 sys 0m7.989s 00:31:15.384 15:12:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 ************************************ 00:31:15.384 END TEST fio_dif_rand_params 00:31:15.384 ************************************ 00:31:15.384 15:12:00 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:15.384 15:12:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:15.384 15:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 ************************************ 00:31:15.384 START TEST fio_dif_digest 00:31:15.384 ************************************ 00:31:15.384 15:12:00 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:31:15.384 15:12:00 -- target/dif.sh@123 -- # local NULL_DIF 00:31:15.384 15:12:00 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:15.384 15:12:00 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:15.384 15:12:00 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:15.384 15:12:00 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:15.384 15:12:00 -- target/dif.sh@127 -- # numjobs=3 00:31:15.384 15:12:00 -- target/dif.sh@127 -- # iodepth=3 00:31:15.384 15:12:00 -- target/dif.sh@127 -- # runtime=10 00:31:15.384 15:12:00 -- target/dif.sh@128 -- # hdgst=true 00:31:15.384 15:12:00 -- target/dif.sh@128 -- # ddgst=true 00:31:15.384 15:12:00 -- target/dif.sh@130 -- # create_subsystems 0 00:31:15.384 15:12:00 -- target/dif.sh@28 -- # local sub 00:31:15.384 15:12:00 -- target/dif.sh@30 -- # for sub in "$@" 00:31:15.384 15:12:00 -- target/dif.sh@31 -- # create_subsystem 0 00:31:15.384 15:12:00 -- target/dif.sh@18 -- # local sub_id=0 00:31:15.384 15:12:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:15.384 15:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 bdev_null0 00:31:15.384 15:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.384 15:12:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:15.384 15:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 15:12:00 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:31:15.384 15:12:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:15.384 15:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 15:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.384 15:12:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.384 15:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.384 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:31:15.384 [2024-04-26 15:12:00.991352] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.384 15:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.384 15:12:00 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:15.384 15:12:00 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:15.384 15:12:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:15.384 15:12:00 -- nvmf/common.sh@521 -- # config=() 00:31:15.384 15:12:00 -- nvmf/common.sh@521 -- # local subsystem config 00:31:15.384 15:12:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:15.384 15:12:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.384 15:12:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:15.384 { 00:31:15.384 "params": { 00:31:15.384 "name": "Nvme$subsystem", 00:31:15.384 "trtype": "$TEST_TRANSPORT", 00:31:15.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.384 "adrfam": "ipv4", 00:31:15.384 "trsvcid": "$NVMF_PORT", 00:31:15.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.384 "hdgst": ${hdgst:-false}, 00:31:15.384 "ddgst": ${ddgst:-false} 00:31:15.384 }, 00:31:15.384 "method": "bdev_nvme_attach_controller" 00:31:15.384 } 00:31:15.384 EOF 00:31:15.384 )") 00:31:15.384 15:12:00 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.384 15:12:00 -- target/dif.sh@82 -- # gen_fio_conf 00:31:15.384 15:12:00 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:15.384 15:12:00 -- target/dif.sh@54 -- # local file 00:31:15.384 15:12:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:15.384 15:12:00 -- target/dif.sh@56 -- # cat 00:31:15.384 15:12:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:15.384 15:12:00 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.384 15:12:00 -- common/autotest_common.sh@1327 -- # shift 00:31:15.384 15:12:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:15.384 15:12:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.384 15:12:00 -- nvmf/common.sh@543 -- # cat 00:31:15.384 15:12:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.384 15:12:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:15.384 15:12:00 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:15.384 15:12:00 -- target/dif.sh@72 -- # (( file <= files )) 00:31:15.384 15:12:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:15.384 15:12:01 -- nvmf/common.sh@545 -- # jq . 
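[Editor's note] The xtrace above is the harness wiring fio to SPDK's bdev layer: gen_nvmf_target_json fills one bdev_nvme_attach_controller stanza per subsystem from the heredoc template, pipes it through jq, and hands the result to stock fio via --spdk_json_conf while LD_PRELOAD injects the SPDK fio plugin (the fully rendered stanza is printed just below). A minimal standalone sketch of the same invocation pattern follows, assuming the JSON is written to an ordinary file instead of the /dev/fd plumbing used here, and assuming the stanza sits inside SPDK's usual "subsystems"/"config" envelope; only the inner object actually appears in this trace, and job.fio stands in for the generated job file the harness feeds over /dev/fd/61.

# Hypothetical reproduction of the traced fio-over-SPDK launch; paths and file names are illustrative.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true, "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
# The plugin exposes each attached bdev as a fio "filename"; hdgst/ddgst enable the
# NVMe/TCP header and data digests that this fio_dif_digest test exercises.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio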
00:31:15.384 15:12:01 -- nvmf/common.sh@546 -- # IFS=, 00:31:15.384 15:12:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:15.384 "params": { 00:31:15.384 "name": "Nvme0", 00:31:15.384 "trtype": "tcp", 00:31:15.384 "traddr": "10.0.0.2", 00:31:15.384 "adrfam": "ipv4", 00:31:15.384 "trsvcid": "4420", 00:31:15.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:15.384 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:15.384 "hdgst": true, 00:31:15.384 "ddgst": true 00:31:15.384 }, 00:31:15.384 "method": "bdev_nvme_attach_controller" 00:31:15.384 }' 00:31:15.384 15:12:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:15.384 15:12:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:15.384 15:12:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.384 15:12:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.384 15:12:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:15.384 15:12:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:15.384 15:12:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:15.384 15:12:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:15.384 15:12:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:15.384 15:12:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.642 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:15.642 ... 00:31:15.642 fio-3.35 00:31:15.642 Starting 3 threads 00:31:15.642 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.837 00:31:27.837 filename0: (groupid=0, jobs=1): err= 0: pid=3922711: Fri Apr 26 15:12:11 2024 00:31:27.837 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10047msec) 00:31:27.837 slat (nsec): min=4604, max=37398, avg=13730.21, stdev=3300.74 00:31:27.837 clat (usec): min=9055, max=55510, avg=14808.15, stdev=1776.54 00:31:27.837 lat (usec): min=9068, max=55524, avg=14821.88, stdev=1776.62 00:31:27.837 clat percentiles (usec): 00:31:27.837 | 1.00th=[10290], 5.00th=[12780], 10.00th=[13435], 20.00th=[13829], 00:31:27.837 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:31:27.837 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:31:27.837 | 99.00th=[17695], 99.50th=[18220], 99.90th=[22676], 99.95th=[48497], 00:31:27.837 | 99.99th=[55313] 00:31:27.837 bw ( KiB/s): min=24576, max=28160, per=33.03%, avg=25948.10, stdev=782.83, samples=20 00:31:27.837 iops : min= 192, max= 220, avg=202.70, stdev= 6.13, samples=20 00:31:27.837 lat (msec) : 10=0.54%, 20=99.21%, 50=0.20%, 100=0.05% 00:31:27.837 cpu : usr=89.49%, sys=10.00%, ctx=27, majf=0, minf=113 00:31:27.837 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.837 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:27.837 filename0: (groupid=0, jobs=1): err= 0: pid=3922712: Fri Apr 26 15:12:11 2024 00:31:27.837 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10046msec) 00:31:27.837 slat (nsec): min=4659, max=48617, avg=13809.47, stdev=3203.98 00:31:27.837 clat (usec): 
min=11176, max=58155, avg=15158.71, stdev=4180.61 00:31:27.837 lat (usec): min=11189, max=58169, avg=15172.52, stdev=4180.55 00:31:27.837 clat percentiles (usec): 00:31:27.837 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:31:27.837 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:31:27.837 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:31:27.837 | 99.00th=[48497], 99.50th=[55313], 99.90th=[57410], 99.95th=[57934], 00:31:27.837 | 99.99th=[57934] 00:31:27.837 bw ( KiB/s): min=21504, max=26624, per=32.27%, avg=25356.80, stdev=1408.24, samples=20 00:31:27.837 iops : min= 168, max= 208, avg=198.10, stdev=11.00, samples=20 00:31:27.837 lat (msec) : 20=98.84%, 50=0.20%, 100=0.96% 00:31:27.837 cpu : usr=89.84%, sys=9.67%, ctx=24, majf=0, minf=88 00:31:27.837 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.837 issued rwts: total=1983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:27.837 filename0: (groupid=0, jobs=1): err= 0: pid=3922713: Fri Apr 26 15:12:11 2024 00:31:27.837 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(269MiB/10046msec) 00:31:27.837 slat (nsec): min=4906, max=55475, avg=14144.88, stdev=3819.00 00:31:27.837 clat (usec): min=8286, max=50247, avg=13934.90, stdev=1492.33 00:31:27.837 lat (usec): min=8299, max=50260, avg=13949.04, stdev=1492.38 00:31:27.837 clat percentiles (usec): 00:31:27.837 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[12518], 20.00th=[13173], 00:31:27.837 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:31:27.837 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15270], 95.00th=[15664], 00:31:27.837 | 99.00th=[16581], 99.50th=[17171], 99.90th=[21627], 99.95th=[21627], 00:31:27.837 | 99.99th=[50070] 00:31:27.837 bw ( KiB/s): min=25856, max=29184, per=35.04%, avg=27532.80, stdev=772.37, samples=20 00:31:27.837 iops : min= 202, max= 228, avg=215.10, stdev= 6.03, samples=20 00:31:27.837 lat (msec) : 10=1.72%, 20=98.10%, 50=0.14%, 100=0.05% 00:31:27.837 cpu : usr=88.72%, sys=10.77%, ctx=31, majf=0, minf=173 00:31:27.837 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.837 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:27.837 00:31:27.837 Run status group 0 (all jobs): 00:31:27.837 READ: bw=76.7MiB/s (80.5MB/s), 24.7MiB/s-26.8MiB/s (25.9MB/s-28.1MB/s), io=771MiB (808MB), run=10046-10047msec 00:31:27.837 15:12:12 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:27.837 15:12:12 -- target/dif.sh@43 -- # local sub 00:31:27.837 15:12:12 -- target/dif.sh@45 -- # for sub in "$@" 00:31:27.837 15:12:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:27.837 15:12:12 -- target/dif.sh@36 -- # local sub_id=0 00:31:27.837 15:12:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:27.837 15:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:27.837 15:12:12 -- common/autotest_common.sh@10 -- # set +x 00:31:27.837 15:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.837 
15:12:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:27.837 15:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:27.837 15:12:12 -- common/autotest_common.sh@10 -- # set +x 00:31:27.837 15:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.837 00:31:27.837 real 0m11.158s 00:31:27.837 user 0m28.072s 00:31:27.837 sys 0m3.329s 00:31:27.837 15:12:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:27.837 15:12:12 -- common/autotest_common.sh@10 -- # set +x 00:31:27.837 ************************************ 00:31:27.837 END TEST fio_dif_digest 00:31:27.837 ************************************ 00:31:27.837 15:12:12 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:27.837 15:12:12 -- target/dif.sh@147 -- # nvmftestfini 00:31:27.837 15:12:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:27.837 15:12:12 -- nvmf/common.sh@117 -- # sync 00:31:27.837 15:12:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:27.837 15:12:12 -- nvmf/common.sh@120 -- # set +e 00:31:27.837 15:12:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:27.837 15:12:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:27.837 rmmod nvme_tcp 00:31:27.837 rmmod nvme_fabrics 00:31:27.837 rmmod nvme_keyring 00:31:27.837 15:12:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:27.837 15:12:12 -- nvmf/common.sh@124 -- # set -e 00:31:27.837 15:12:12 -- nvmf/common.sh@125 -- # return 0 00:31:27.837 15:12:12 -- nvmf/common.sh@478 -- # '[' -n 3916432 ']' 00:31:27.837 15:12:12 -- nvmf/common.sh@479 -- # killprocess 3916432 00:31:27.837 15:12:12 -- common/autotest_common.sh@936 -- # '[' -z 3916432 ']' 00:31:27.837 15:12:12 -- common/autotest_common.sh@940 -- # kill -0 3916432 00:31:27.837 15:12:12 -- common/autotest_common.sh@941 -- # uname 00:31:27.837 15:12:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:27.837 15:12:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3916432 00:31:27.837 15:12:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:27.837 15:12:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:27.837 15:12:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3916432' 00:31:27.837 killing process with pid 3916432 00:31:27.837 15:12:12 -- common/autotest_common.sh@955 -- # kill 3916432 00:31:27.837 15:12:12 -- common/autotest_common.sh@960 -- # wait 3916432 00:31:27.837 15:12:12 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:31:27.837 15:12:12 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:27.837 Waiting for block devices as requested 00:31:27.837 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:31:28.096 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:28.096 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:28.096 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:28.096 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:28.355 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:28.355 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:28.355 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:28.355 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:28.355 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:28.613 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:28.613 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:28.613 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:28.873 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:28.873 0000:80:04.2 (8086 0e22): 
vfio-pci -> ioatdma 00:31:28.873 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:28.873 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:29.132 15:12:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:29.132 15:12:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:29.132 15:12:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:29.132 15:12:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:29.132 15:12:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.132 15:12:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:29.132 15:12:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.073 15:12:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:31.073 00:31:31.073 real 1m6.551s 00:31:31.073 user 6m26.640s 00:31:31.073 sys 0m21.216s 00:31:31.073 15:12:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:31.073 15:12:16 -- common/autotest_common.sh@10 -- # set +x 00:31:31.073 ************************************ 00:31:31.073 END TEST nvmf_dif 00:31:31.073 ************************************ 00:31:31.073 15:12:16 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:31.073 15:12:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:31.073 15:12:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:31.073 15:12:16 -- common/autotest_common.sh@10 -- # set +x 00:31:31.332 ************************************ 00:31:31.332 START TEST nvmf_abort_qd_sizes 00:31:31.332 ************************************ 00:31:31.332 15:12:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:31.332 * Looking for test storage... 
00:31:31.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:31.332 15:12:16 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.332 15:12:16 -- nvmf/common.sh@7 -- # uname -s 00:31:31.332 15:12:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.332 15:12:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.332 15:12:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.332 15:12:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.332 15:12:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.332 15:12:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.332 15:12:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.332 15:12:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.332 15:12:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.332 15:12:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.332 15:12:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:31.332 15:12:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:31.332 15:12:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.332 15:12:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.332 15:12:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.332 15:12:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.332 15:12:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.332 15:12:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.332 15:12:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.332 15:12:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.332 15:12:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.332 15:12:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.332 15:12:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.332 15:12:16 -- paths/export.sh@5 -- # export PATH 00:31:31.332 15:12:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.332 15:12:16 -- nvmf/common.sh@47 -- # : 0 00:31:31.332 15:12:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:31.332 15:12:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:31.332 15:12:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.332 15:12:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.332 15:12:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.332 15:12:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:31.332 15:12:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:31.332 15:12:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:31.332 15:12:16 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:31.332 15:12:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:31.332 15:12:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.332 15:12:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:31.332 15:12:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:31.332 15:12:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:31.332 15:12:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.332 15:12:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:31.332 15:12:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.332 15:12:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:31.332 15:12:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:31.332 15:12:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:31.332 15:12:16 -- common/autotest_common.sh@10 -- # set +x 00:31:33.234 15:12:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:33.234 15:12:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:33.234 15:12:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:33.234 15:12:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:33.234 15:12:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:33.234 15:12:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:33.234 15:12:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:33.234 15:12:18 -- nvmf/common.sh@295 -- # net_devs=() 00:31:33.234 15:12:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:33.234 15:12:18 -- nvmf/common.sh@296 -- # e810=() 00:31:33.234 15:12:18 -- nvmf/common.sh@296 -- # local -ga e810 00:31:33.234 15:12:18 -- nvmf/common.sh@297 -- # x722=() 00:31:33.234 15:12:18 -- nvmf/common.sh@297 -- # local -ga x722 00:31:33.234 15:12:18 -- nvmf/common.sh@298 -- # mlx=() 00:31:33.234 15:12:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:33.234 15:12:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.234 15:12:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:33.234 15:12:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:33.234 15:12:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:33.234 15:12:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:33.234 15:12:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:33.234 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:33.234 15:12:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:33.234 15:12:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:33.234 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:33.234 15:12:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:33.234 15:12:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:33.234 15:12:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.234 15:12:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:33.234 15:12:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.234 15:12:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:33.234 Found net devices under 0000:84:00.0: cvl_0_0 00:31:33.234 15:12:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.234 15:12:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:33.234 15:12:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.234 15:12:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:33.234 15:12:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.234 15:12:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:33.234 Found net devices under 0000:84:00.1: cvl_0_1 00:31:33.234 15:12:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.234 15:12:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:33.234 15:12:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:33.234 15:12:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:33.234 15:12:18 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:33.234 15:12:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:33.234 15:12:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.234 15:12:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.234 15:12:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.234 15:12:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:33.234 15:12:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.234 15:12:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.234 15:12:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:33.234 15:12:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.234 15:12:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.234 15:12:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:33.234 15:12:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:33.234 15:12:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.234 15:12:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.493 15:12:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.493 15:12:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.493 15:12:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:33.493 15:12:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.493 15:12:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.493 15:12:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.493 15:12:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:33.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:33.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:31:33.493 00:31:33.493 --- 10.0.0.2 ping statistics --- 00:31:33.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.493 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:31:33.493 15:12:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:33.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:31:33.494 00:31:33.494 --- 10.0.0.1 ping statistics --- 00:31:33.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.494 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:33.494 15:12:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.494 15:12:19 -- nvmf/common.sh@411 -- # return 0 00:31:33.494 15:12:19 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:31:33.494 15:12:19 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:34.430 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:34.430 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:34.430 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:34.430 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:34.430 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:34.430 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:34.430 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:34.430 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:34.430 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:34.430 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:34.430 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:34.430 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:34.430 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:34.430 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:34.430 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:34.430 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:35.810 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:31:35.810 15:12:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.810 15:12:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:35.810 15:12:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:35.810 15:12:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.810 15:12:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:35.810 15:12:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:35.810 15:12:21 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:35.810 15:12:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:35.810 15:12:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:35.810 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:31:35.810 15:12:21 -- nvmf/common.sh@470 -- # nvmfpid=3928054 00:31:35.810 15:12:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:35.810 15:12:21 -- nvmf/common.sh@471 -- # waitforlisten 3928054 00:31:35.810 15:12:21 -- common/autotest_common.sh@817 -- # '[' -z 3928054 ']' 00:31:35.810 15:12:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.810 15:12:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:35.810 15:12:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.810 15:12:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:35.810 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:31:35.810 [2024-04-26 15:12:21.303989] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 
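[Editor's note] One detail worth calling out from the nvmf_tcp_init trace above: the two ports of the test NIC (cvl_0_0 and cvl_0_1, presumably cabled back-to-back) are split across network namespaces, so the initiator at 10.0.0.1 and the target at 10.0.0.2 really cross the link rather than the loopback. A sketch of that setup, with every command taken from the trace and only the comments added:

# Namespace topology from nvmf_tcp_init: target port isolated, initiator port left in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns to namespaced target, as in the log
# From here on, the target app runs wrapped in the namespace, exactly as traced:
#   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf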
00:31:35.810 [2024-04-26 15:12:21.304102] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.810 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.810 [2024-04-26 15:12:21.341528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:35.810 [2024-04-26 15:12:21.378351] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:35.810 [2024-04-26 15:12:21.469548] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.810 [2024-04-26 15:12:21.469614] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.810 [2024-04-26 15:12:21.469643] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.810 [2024-04-26 15:12:21.469660] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.810 [2024-04-26 15:12:21.469676] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.810 [2024-04-26 15:12:21.469751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.810 [2024-04-26 15:12:21.469822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.810 [2024-04-26 15:12:21.469882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.810 [2024-04-26 15:12:21.469888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.069 15:12:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:36.069 15:12:21 -- common/autotest_common.sh@850 -- # return 0 00:31:36.069 15:12:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:36.069 15:12:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:36.069 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:31:36.069 15:12:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.069 15:12:21 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:36.069 15:12:21 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:36.069 15:12:21 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:36.069 15:12:21 -- scripts/common.sh@309 -- # local bdf bdfs 00:31:36.069 15:12:21 -- scripts/common.sh@310 -- # local nvmes 00:31:36.069 15:12:21 -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:31:36.069 15:12:21 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:36.069 15:12:21 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:36.069 15:12:21 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:31:36.069 15:12:21 -- scripts/common.sh@320 -- # uname -s 00:31:36.069 15:12:21 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:36.069 15:12:21 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:36.069 15:12:21 -- scripts/common.sh@325 -- # (( 1 )) 00:31:36.069 15:12:21 -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:31:36.069 15:12:21 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:36.069 15:12:21 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:31:36.069 15:12:21 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort 
spdk_target 00:31:36.069 15:12:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:36.069 15:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:36.069 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:31:36.069 ************************************ 00:31:36.069 START TEST spdk_target_abort 00:31:36.069 ************************************ 00:31:36.069 15:12:21 -- common/autotest_common.sh@1111 -- # spdk_target 00:31:36.069 15:12:21 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:36.069 15:12:21 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:31:36.069 15:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.069 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:31:39.348 spdk_targetn1 00:31:39.348 15:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:39.348 15:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.348 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:31:39.348 [2024-04-26 15:12:24.592192] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.348 15:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:39.348 15:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.348 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:31:39.348 15:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:39.348 15:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.348 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:31:39.348 15:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:39.348 15:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.348 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:31:39.348 [2024-04-26 15:12:24.624477] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.348 15:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.348 15:12:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:39.349 15:12:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.349 15:12:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.349 15:12:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.349 15:12:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.349 15:12:24 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:39.349 15:12:24 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.349 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.628 Initializing NVMe Controllers 00:31:42.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:42.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:42.628 Initialization complete. Launching workers. 00:31:42.628 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10507, failed: 0 00:31:42.628 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1282, failed to submit 9225 00:31:42.628 success 696, unsuccess 586, failed 0 00:31:42.628 15:12:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:42.628 15:12:27 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:42.628 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.904 [2024-04-26 15:12:31.184058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184128] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184168] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184228] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 [2024-04-26 15:12:31.184253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23628c0 is same with the state(5) to be set 00:31:45.904 Initializing NVMe Controllers 00:31:45.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:45.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:45.904 Initialization complete. Launching workers. 00:31:45.904 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8692, failed: 0 00:31:45.904 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7464 00:31:45.904 success 332, unsuccess 896, failed 0 00:31:45.904 15:12:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:45.904 15:12:31 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:45.904 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.176 Initializing NVMe Controllers 00:31:49.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:49.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:49.176 Initialization complete. Launching workers. 
00:31:49.176 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31395, failed: 0 00:31:49.176 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2690, failed to submit 28705 00:31:49.176 success 522, unsuccess 2168, failed 0 00:31:49.176 15:12:34 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:49.176 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.176 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:31:49.176 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.176 15:12:34 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:49.176 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.176 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:31:50.108 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.108 15:12:35 -- target/abort_qd_sizes.sh@61 -- # killprocess 3928054 00:31:50.108 15:12:35 -- common/autotest_common.sh@936 -- # '[' -z 3928054 ']' 00:31:50.108 15:12:35 -- common/autotest_common.sh@940 -- # kill -0 3928054 00:31:50.108 15:12:35 -- common/autotest_common.sh@941 -- # uname 00:31:50.373 15:12:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:50.373 15:12:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3928054 00:31:50.373 15:12:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:50.373 15:12:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:50.373 15:12:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3928054' 00:31:50.373 killing process with pid 3928054 00:31:50.373 15:12:35 -- common/autotest_common.sh@955 -- # kill 3928054 00:31:50.373 15:12:35 -- common/autotest_common.sh@960 -- # wait 3928054 00:31:50.373 00:31:50.373 real 0m14.350s 00:31:50.373 user 0m54.485s 00:31:50.373 sys 0m2.983s 00:31:50.373 15:12:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:50.373 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:31:50.373 ************************************ 00:31:50.373 END TEST spdk_target_abort 00:31:50.373 ************************************ 00:31:50.688 15:12:36 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:50.688 15:12:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:50.688 15:12:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:50.688 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:31:50.688 ************************************ 00:31:50.688 START TEST kernel_target_abort 00:31:50.688 ************************************ 00:31:50.688 15:12:36 -- common/autotest_common.sh@1111 -- # kernel_target 00:31:50.688 15:12:36 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:50.688 15:12:36 -- nvmf/common.sh@717 -- # local ip 00:31:50.688 15:12:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:50.688 15:12:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:50.688 15:12:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.688 15:12:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.688 15:12:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:50.688 15:12:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.688 15:12:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:50.688 15:12:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:50.688 15:12:36 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:31:50.688 15:12:36 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:50.688 15:12:36 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:50.688 15:12:36 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:31:50.688 15:12:36 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:50.688 15:12:36 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:50.688 15:12:36 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:50.688 15:12:36 -- nvmf/common.sh@628 -- # local block nvme 00:31:50.688 15:12:36 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:31:50.688 15:12:36 -- nvmf/common.sh@631 -- # modprobe nvmet 00:31:50.688 15:12:36 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:50.688 15:12:36 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:51.624 Waiting for block devices as requested 00:31:51.884 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:31:51.884 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:51.884 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:51.884 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:52.143 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:52.143 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:52.143 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:52.143 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:52.143 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:52.402 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:52.402 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:52.402 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:52.662 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:52.662 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:52.662 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:52.662 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:52.922 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:52.922 15:12:38 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:52.922 15:12:38 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:52.922 15:12:38 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:31:52.922 15:12:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:52.922 15:12:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:52.922 15:12:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:52.922 15:12:38 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:31:52.922 15:12:38 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:52.922 15:12:38 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:52.922 No valid GPT data, bailing 00:31:52.922 15:12:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:52.922 15:12:38 -- scripts/common.sh@391 -- # pt= 00:31:52.922 15:12:38 -- scripts/common.sh@392 -- # return 1 00:31:52.922 15:12:38 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:31:52.922 15:12:38 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:31:52.922 15:12:38 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:52.922 15:12:38 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:52.922 15:12:38 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:52.922 15:12:38 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:52.922 15:12:38 -- nvmf/common.sh@656 -- # echo 1 00:31:52.922 15:12:38 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:31:52.922 15:12:38 -- nvmf/common.sh@658 -- # echo 1 00:31:52.922 15:12:38 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:31:52.922 15:12:38 -- nvmf/common.sh@661 -- # echo tcp 00:31:52.922 15:12:38 -- nvmf/common.sh@662 -- # echo 4420 00:31:52.922 15:12:38 -- nvmf/common.sh@663 -- # echo ipv4 00:31:52.922 15:12:38 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:52.922 15:12:38 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:31:52.922 00:31:52.922 Discovery Log Number of Records 2, Generation counter 2 00:31:52.922 =====Discovery Log Entry 0====== 00:31:52.922 trtype: tcp 00:31:52.922 adrfam: ipv4 00:31:52.922 subtype: current discovery subsystem 00:31:52.922 treq: not specified, sq flow control disable supported 00:31:52.922 portid: 1 00:31:52.922 trsvcid: 4420 00:31:52.922 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:52.922 traddr: 10.0.0.1 00:31:52.922 eflags: none 00:31:52.922 sectype: none 00:31:52.922 =====Discovery Log Entry 1====== 00:31:52.922 trtype: tcp 00:31:52.922 adrfam: ipv4 00:31:52.922 subtype: nvme subsystem 00:31:52.922 treq: not specified, sq flow control disable supported 00:31:52.922 portid: 1 00:31:52.922 trsvcid: 4420 00:31:52.922 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:52.922 traddr: 10.0.0.1 00:31:52.922 eflags: none 00:31:52.922 sectype: none 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:52.922 15:12:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:52.922 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.239 Initializing NVMe Controllers 00:31:56.239 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:56.239 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:56.240 Initialization complete. Launching workers. 00:31:56.240 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40435, failed: 0 00:31:56.240 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40435, failed to submit 0 00:31:56.240 success 0, unsuccess 40435, failed 0 00:31:56.240 15:12:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:56.240 15:12:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:56.240 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.521 Initializing NVMe Controllers 00:31:59.521 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:59.521 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:59.521 Initialization complete. Launching workers. 00:31:59.521 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79442, failed: 0 00:31:59.521 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20014, failed to submit 59428 00:31:59.521 success 0, unsuccess 20014, failed 0 00:31:59.521 15:12:44 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:59.521 15:12:44 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:59.521 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.803 Initializing NVMe Controllers 00:32:02.803 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:02.803 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:02.803 Initialization complete. Launching workers. 
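# Annotation: a minimal sketch of what configure_kernel_target did above, with
# the echoed values mapped onto the standard kernel nvmet configfs attribute
# files. The attribute names (attr_allow_any_host, device_path, enable,
# addr_*) are inferred from the echoed values, not quoted from the trace; the
# "SPDK-nqn..." echo sets the subsystem model string and is omitted here.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # linking the subsystem enables the port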
00:32:02.803 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75123, failed: 0 00:32:02.803 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18746, failed to submit 56377 00:32:02.803 success 0, unsuccess 18746, failed 0 00:32:02.803 15:12:47 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:02.803 15:12:47 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:02.803 15:12:47 -- nvmf/common.sh@675 -- # echo 0 00:32:02.803 15:12:47 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:02.803 15:12:47 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:02.803 15:12:47 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:02.803 15:12:47 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:02.803 15:12:47 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:32:02.803 15:12:47 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:32:02.803 15:12:47 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:03.368 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:03.368 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:03.368 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:03.368 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:03.368 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:03.368 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:03.368 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:03.368 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:03.368 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:03.368 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:03.368 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:03.368 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:03.368 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:03.368 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:03.368 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:03.368 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:04.304 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:32:04.562 00:32:04.562 real 0m13.840s 00:32:04.562 user 0m5.918s 00:32:04.562 sys 0m3.103s 00:32:04.562 15:12:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:04.562 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:32:04.562 ************************************ 00:32:04.562 END TEST kernel_target_abort 00:32:04.562 ************************************ 00:32:04.562 15:12:50 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:04.562 15:12:50 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:04.562 15:12:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:04.562 15:12:50 -- nvmf/common.sh@117 -- # sync 00:32:04.562 15:12:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:04.562 15:12:50 -- nvmf/common.sh@120 -- # set +e 00:32:04.562 15:12:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:04.562 15:12:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:04.562 rmmod nvme_tcp 00:32:04.562 rmmod nvme_fabrics 00:32:04.562 rmmod nvme_keyring 00:32:04.562 15:12:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:04.562 15:12:50 -- nvmf/common.sh@124 -- # set -e 00:32:04.562 15:12:50 -- nvmf/common.sh@125 -- # return 0 00:32:04.562 15:12:50 -- nvmf/common.sh@478 -- # '[' -n 3928054 ']' 
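# Annotation: the three runs above are a single loop over queue depths; a
# sketch of the driving command, with flags copied from the xtrace lines
# (-w rw workload, -M 50 read percentage, -o 4096 I/O size, -r transport ID):
for qd in 4 24 64; do
  ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done
# The stats show the expected trend: at -q 4 every abort was submitted
# (40435/40435, failed to submit 0), while at -q 24 and -q 64 most aborts
# failed to submit (59428 and 56377 respectively).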
00:32:04.562 15:12:50 -- nvmf/common.sh@479 -- # killprocess 3928054 00:32:04.562 15:12:50 -- common/autotest_common.sh@936 -- # '[' -z 3928054 ']' 00:32:04.562 15:12:50 -- common/autotest_common.sh@940 -- # kill -0 3928054 00:32:04.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3928054) - No such process 00:32:04.562 15:12:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3928054 is not found' 00:32:04.562 Process with pid 3928054 is not found 00:32:04.562 15:12:50 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:32:04.562 15:12:50 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:05.497 Waiting for block devices as requested 00:32:05.497 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:32:05.497 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:05.497 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:05.755 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:05.755 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:05.755 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:05.755 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:06.015 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:06.015 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:06.015 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:06.015 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:06.273 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:06.273 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:06.273 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:06.273 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:06.531 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:06.531 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:06.531 15:12:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:06.531 15:12:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:06.531 15:12:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:06.531 15:12:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:06.531 15:12:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.531 15:12:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:06.531 15:12:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.091 15:12:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:09.091 00:32:09.091 real 0m37.416s 00:32:09.091 user 1m2.388s 00:32:09.091 sys 0m9.307s 00:32:09.091 15:12:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:09.091 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.091 ************************************ 00:32:09.091 END TEST nvmf_abort_qd_sizes 00:32:09.091 ************************************ 00:32:09.091 15:12:54 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:09.091 15:12:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:09.091 15:12:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:09.091 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.091 ************************************ 00:32:09.091 START TEST keyring_file 00:32:09.091 ************************************ 00:32:09.091 15:12:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:09.091 * Looking for test storage... 
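# Annotation: clean_kernel_target (traced above) unwinds the configfs tree in
# reverse creation order; a minimal sketch of those steps (the bare "echo 0"
# in the trace is taken to disable the namespace):
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet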
00:32:09.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:09.091 15:12:54 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:09.091 15:12:54 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.091 15:12:54 -- nvmf/common.sh@7 -- # uname -s 00:32:09.091 15:12:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.091 15:12:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.091 15:12:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.091 15:12:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.091 15:12:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.091 15:12:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.091 15:12:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.091 15:12:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.091 15:12:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.091 15:12:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.091 15:12:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:09.091 15:12:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:09.091 15:12:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.091 15:12:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.091 15:12:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.091 15:12:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.091 15:12:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.091 15:12:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.091 15:12:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.091 15:12:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.091 15:12:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.091 15:12:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.091 15:12:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.091 15:12:54 -- paths/export.sh@5 -- # export PATH 00:32:09.091 15:12:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.091 15:12:54 -- nvmf/common.sh@47 -- # : 0 00:32:09.091 15:12:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:09.091 15:12:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:09.091 15:12:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.091 15:12:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.091 15:12:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.091 15:12:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:09.091 15:12:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:09.091 15:12:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:09.091 15:12:54 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:09.091 15:12:54 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:09.091 15:12:54 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:09.091 15:12:54 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:09.091 15:12:54 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:09.091 15:12:54 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:09.091 15:12:54 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:09.091 15:12:54 -- keyring/common.sh@15 -- # local name key digest path 00:32:09.091 15:12:54 -- keyring/common.sh@17 -- # name=key0 00:32:09.091 15:12:54 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:09.091 15:12:54 -- keyring/common.sh@17 -- # digest=0 00:32:09.091 15:12:54 -- keyring/common.sh@18 -- # mktemp 00:32:09.091 15:12:54 -- keyring/common.sh@18 -- # path=/tmp/tmp.SUU8SaHDhz 00:32:09.091 15:12:54 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:09.091 15:12:54 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:09.091 15:12:54 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:09.091 15:12:54 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:32:09.091 15:12:54 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:32:09.091 15:12:54 -- nvmf/common.sh@693 -- # digest=0 00:32:09.091 15:12:54 -- nvmf/common.sh@694 -- # python - 00:32:09.091 15:12:54 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SUU8SaHDhz 00:32:09.091 15:12:54 -- keyring/common.sh@23 -- # echo /tmp/tmp.SUU8SaHDhz 00:32:09.091 15:12:54 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SUU8SaHDhz 00:32:09.091 15:12:54 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:09.091 15:12:54 -- keyring/common.sh@15 -- # local name key digest path 00:32:09.091 15:12:54 -- keyring/common.sh@17 -- # name=key1 00:32:09.091 15:12:54 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:09.091 15:12:54 -- keyring/common.sh@17 -- # digest=0 00:32:09.091 15:12:54 -- keyring/common.sh@18 -- # mktemp 00:32:09.091 15:12:54 -- keyring/common.sh@18 -- # path=/tmp/tmp.glk93BAYay 00:32:09.091 15:12:54 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:09.091 15:12:54 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:32:09.091 15:12:54 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:09.091 15:12:54 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:32:09.091 15:12:54 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:32:09.092 15:12:54 -- nvmf/common.sh@693 -- # digest=0 00:32:09.092 15:12:54 -- nvmf/common.sh@694 -- # python - 00:32:09.092 15:12:54 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.glk93BAYay 00:32:09.092 15:12:54 -- keyring/common.sh@23 -- # echo /tmp/tmp.glk93BAYay 00:32:09.092 15:12:54 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.glk93BAYay 00:32:09.092 15:12:54 -- keyring/file.sh@30 -- # tgtpid=3933751 00:32:09.092 15:12:54 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:09.092 15:12:54 -- keyring/file.sh@32 -- # waitforlisten 3933751 00:32:09.092 15:12:54 -- common/autotest_common.sh@817 -- # '[' -z 3933751 ']' 00:32:09.092 15:12:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.092 15:12:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:09.092 15:12:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.092 15:12:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:09.092 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.092 [2024-04-26 15:12:54.586991] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:32:09.092 [2024-04-26 15:12:54.587091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933751 ] 00:32:09.092 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.092 [2024-04-26 15:12:54.621642] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
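# Annotation: format_interchange_psk above wraps the raw hex key into the
# TP8011 PSK interchange format via an inline python snippet. A hedged
# reconstruction follows; the "01" identifier for digest 0 (no hash) and the
# little-endian 4-byte CRC32 suffix are assumptions from the spec, not read
# from the script:
python3 - <<'EOF'
import base64, zlib
key = bytes.fromhex("00112233445566778899aabbccddeeff")
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 of the key bytes (assumed LE)
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF
# The result is what lands in /tmp/tmp.SUU8SaHDhz, then chmod 0600; the later
# permission-check test depends on that mode.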
00:32:09.092 [2024-04-26 15:12:54.650756] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.092 [2024-04-26 15:12:54.734424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.351 15:12:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:09.351 15:12:54 -- common/autotest_common.sh@850 -- # return 0 00:32:09.351 15:12:54 -- keyring/file.sh@33 -- # rpc_cmd 00:32:09.351 15:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.351 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.351 [2024-04-26 15:12:54.964112] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.351 null0 00:32:09.351 [2024-04-26 15:12:54.996139] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:09.351 [2024-04-26 15:12:54.996685] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:09.351 [2024-04-26 15:12:55.004141] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:09.351 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.351 15:12:55 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:09.351 15:12:55 -- common/autotest_common.sh@638 -- # local es=0 00:32:09.351 15:12:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:09.351 15:12:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:32:09.351 15:12:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:09.351 15:12:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:32:09.351 15:12:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:09.351 15:12:55 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:09.351 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.351 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:32:09.351 [2024-04-26 15:12:55.016163] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:32:09.351 { 00:32:09.351 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:09.351 "secure_channel": false, 00:32:09.351 "listen_address": { 00:32:09.351 "trtype": "tcp", 00:32:09.351 "traddr": "127.0.0.1", 00:32:09.351 "trsvcid": "4420" 00:32:09.351 }, 00:32:09.351 "method": "nvmf_subsystem_add_listener", 00:32:09.351 "req_id": 1 00:32:09.351 } 00:32:09.351 Got JSON-RPC error response 00:32:09.351 response: 00:32:09.351 { 00:32:09.351 "code": -32602, 00:32:09.351 "message": "Invalid parameters" 00:32:09.351 } 00:32:09.351 15:12:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:32:09.351 15:12:55 -- common/autotest_common.sh@641 -- # es=1 00:32:09.351 15:12:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:09.351 15:12:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:09.351 15:12:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:09.351 15:12:55 -- keyring/file.sh@46 -- # bperfpid=3933860 00:32:09.351 15:12:55 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:09.351 15:12:55 -- keyring/file.sh@48 -- # waitforlisten 3933860 /var/tmp/bperf.sock 
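# Annotation: bperf_cmd in the trace below is rpc.py pointed at bdevperf's own
# RPC socket (/var/tmp/bperf.sock) rather than spdk_tgt's. The key setup that
# follows boils down to three calls (key file paths as created above):
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SUU8SaHDhz
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.glk93BAYay
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .path'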
00:32:09.351 15:12:55 -- common/autotest_common.sh@817 -- # '[' -z 3933860 ']' 00:32:09.351 15:12:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:09.351 15:12:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:09.351 15:12:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:09.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:09.351 15:12:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:09.351 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:32:09.351 [2024-04-26 15:12:55.063309] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:32:09.351 [2024-04-26 15:12:55.063399] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933860 ] 00:32:09.609 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.609 [2024-04-26 15:12:55.096008] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:09.609 [2024-04-26 15:12:55.126015] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.609 [2024-04-26 15:12:55.214798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.609 15:12:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:09.609 15:12:55 -- common/autotest_common.sh@850 -- # return 0 00:32:09.609 15:12:55 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SUU8SaHDhz 00:32:09.609 15:12:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SUU8SaHDhz 00:32:09.867 15:12:55 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.glk93BAYay 00:32:09.867 15:12:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.glk93BAYay 00:32:10.125 15:12:55 -- keyring/file.sh@51 -- # get_key key0 00:32:10.125 15:12:55 -- keyring/file.sh@51 -- # jq -r .path 00:32:10.125 15:12:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:10.125 15:12:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:10.125 15:12:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:10.383 15:12:56 -- keyring/file.sh@51 -- # [[ /tmp/tmp.SUU8SaHDhz == \/\t\m\p\/\t\m\p\.\S\U\U\8\S\a\H\D\h\z ]] 00:32:10.383 15:12:56 -- keyring/file.sh@52 -- # get_key key1 00:32:10.383 15:12:56 -- keyring/file.sh@52 -- # jq -r .path 00:32:10.383 15:12:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:10.383 15:12:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:10.383 15:12:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:10.642 15:12:56 -- keyring/file.sh@52 -- # [[ /tmp/tmp.glk93BAYay == \/\t\m\p\/\t\m\p\.\g\l\k\9\3\B\A\Y\a\y ]] 00:32:10.642 15:12:56 -- keyring/file.sh@53 -- # get_refcnt key0 00:32:10.642 15:12:56 -- keyring/common.sh@12 -- # get_key key0 00:32:10.642 15:12:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:10.642 
15:12:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:10.642 15:12:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:10.642 15:12:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:10.900 15:12:56 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:10.900 15:12:56 -- keyring/file.sh@54 -- # get_refcnt key1 00:32:10.900 15:12:56 -- keyring/common.sh@12 -- # get_key key1 00:32:10.900 15:12:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:10.900 15:12:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:10.900 15:12:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:10.900 15:12:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:11.158 15:12:56 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:11.158 15:12:56 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:11.158 15:12:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:11.416 [2024-04-26 15:12:57.009589] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:11.416 nvme0n1 00:32:11.416 15:12:57 -- keyring/file.sh@59 -- # get_refcnt key0 00:32:11.416 15:12:57 -- keyring/common.sh@12 -- # get_key key0 00:32:11.416 15:12:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:11.416 15:12:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.416 15:12:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.416 15:12:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:11.675 15:12:57 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:11.675 15:12:57 -- keyring/file.sh@60 -- # get_refcnt key1 00:32:11.675 15:12:57 -- keyring/common.sh@12 -- # get_key key1 00:32:11.675 15:12:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:11.675 15:12:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.675 15:12:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.675 15:12:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:11.933 15:12:57 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:11.933 15:12:57 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:12.191 Running I/O for 1 seconds... 
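# Annotation: the 1-second I/O run below is driven by two calls, both of which
# appear verbatim in the trace: attach an NVMe-oF controller through the
# registered key, then kick bdevperf's test loop:
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
  -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
  -q nqn.2016-06.io.spdk:host0 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# Attaching raises key0's refcnt from 1 to 2 and leaves key1 at 1, which is
# exactly what the surrounding get_refcnt checks assert.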
00:32:13.125 00:32:13.125 Latency(us) 00:32:13.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.125 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:13.125 nvme0n1 : 1.02 5768.10 22.53 0.00 0.00 21991.39 12039.21 214375.54 00:32:13.125 =================================================================================================================== 00:32:13.125 Total : 5768.10 22.53 0.00 0.00 21991.39 12039.21 214375.54 00:32:13.125 0 00:32:13.125 15:12:58 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:13.125 15:12:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:13.383 15:12:58 -- keyring/file.sh@65 -- # get_refcnt key0 00:32:13.383 15:12:58 -- keyring/common.sh@12 -- # get_key key0 00:32:13.383 15:12:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:13.383 15:12:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.383 15:12:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.383 15:12:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:13.641 15:12:59 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:13.641 15:12:59 -- keyring/file.sh@66 -- # get_refcnt key1 00:32:13.641 15:12:59 -- keyring/common.sh@12 -- # get_key key1 00:32:13.641 15:12:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:13.641 15:12:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.641 15:12:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.641 15:12:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:13.899 15:12:59 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:13.899 15:12:59 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:13.899 15:12:59 -- common/autotest_common.sh@638 -- # local es=0 00:32:13.899 15:12:59 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:13.899 15:12:59 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:32:13.899 15:12:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:13.899 15:12:59 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:32:13.899 15:12:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:13.899 15:12:59 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:13.899 15:12:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:14.158 [2024-04-26 15:12:59.703200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:14.158 [2024-04-26 15:12:59.703790] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77e910 (107): Transport endpoint is not connected 00:32:14.158 [2024-04-26 15:12:59.704779] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77e910 (9): Bad file descriptor 00:32:14.158 [2024-04-26 15:12:59.705778] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:14.158 [2024-04-26 15:12:59.705801] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:14.158 [2024-04-26 15:12:59.705816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:14.158 request: 00:32:14.158 { 00:32:14.158 "name": "nvme0", 00:32:14.158 "trtype": "tcp", 00:32:14.158 "traddr": "127.0.0.1", 00:32:14.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.158 "adrfam": "ipv4", 00:32:14.158 "trsvcid": "4420", 00:32:14.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.158 "psk": "key1", 00:32:14.158 "method": "bdev_nvme_attach_controller", 00:32:14.158 "req_id": 1 00:32:14.158 } 00:32:14.158 Got JSON-RPC error response 00:32:14.158 response: 00:32:14.158 { 00:32:14.158 "code": -32602, 00:32:14.158 "message": "Invalid parameters" 00:32:14.158 } 00:32:14.158 15:12:59 -- common/autotest_common.sh@641 -- # es=1 00:32:14.158 15:12:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:14.158 15:12:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:14.158 15:12:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:14.158 15:12:59 -- keyring/file.sh@71 -- # get_refcnt key0 00:32:14.158 15:12:59 -- keyring/common.sh@12 -- # get_key key0 00:32:14.158 15:12:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.158 15:12:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.158 15:12:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.158 15:12:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:14.416 15:12:59 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:14.416 15:12:59 -- keyring/file.sh@72 -- # get_refcnt key1 00:32:14.416 15:12:59 -- keyring/common.sh@12 -- # get_key key1 00:32:14.416 15:12:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.416 15:12:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.416 15:12:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:14.416 15:12:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.675 15:13:00 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:14.675 15:13:00 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:14.675 15:13:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:14.933 15:13:00 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:14.933 15:13:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:15.192 15:13:00 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:15.192 15:13:00 -- keyring/file.sh@77 -- # jq length 00:32:15.192 15:13:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.450 15:13:00 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:15.450 15:13:00 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.SUU8SaHDhz 00:32:15.450 15:13:00 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SUU8SaHDhz 00:32:15.450 15:13:00 -- common/autotest_common.sh@638 -- # local es=0 00:32:15.450 15:13:00 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SUU8SaHDhz 00:32:15.450 15:13:00 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:32:15.450 15:13:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:15.450 15:13:00 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:32:15.450 15:13:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:15.450 15:13:00 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SUU8SaHDhz 00:32:15.450 15:13:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SUU8SaHDhz 00:32:15.709 [2024-04-26 15:13:01.191666] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SUU8SaHDhz': 0100660 00:32:15.709 [2024-04-26 15:13:01.191713] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:15.709 request: 00:32:15.709 { 00:32:15.709 "name": "key0", 00:32:15.709 "path": "/tmp/tmp.SUU8SaHDhz", 00:32:15.709 "method": "keyring_file_add_key", 00:32:15.709 "req_id": 1 00:32:15.709 } 00:32:15.709 Got JSON-RPC error response 00:32:15.709 response: 00:32:15.709 { 00:32:15.709 "code": -1, 00:32:15.709 "message": "Operation not permitted" 00:32:15.709 } 00:32:15.709 15:13:01 -- common/autotest_common.sh@641 -- # es=1 00:32:15.709 15:13:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:15.709 15:13:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:15.709 15:13:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:15.709 15:13:01 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.SUU8SaHDhz 00:32:15.709 15:13:01 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SUU8SaHDhz 00:32:15.709 15:13:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SUU8SaHDhz 00:32:15.967 15:13:01 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.SUU8SaHDhz 00:32:15.967 15:13:01 -- keyring/file.sh@88 -- # get_refcnt key0 00:32:15.967 15:13:01 -- keyring/common.sh@12 -- # get_key key0 00:32:15.967 15:13:01 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.967 15:13:01 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.967 15:13:01 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:15.967 15:13:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.967 15:13:01 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:15.967 15:13:01 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.967 15:13:01 -- common/autotest_common.sh@638 -- # local es=0 00:32:15.967 15:13:01 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.967 15:13:01 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:32:16.226 15:13:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:16.226 15:13:01 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:32:16.226 15:13:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:16.226 15:13:01 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:16.226 15:13:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:16.226 [2024-04-26 15:13:01.945699] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SUU8SaHDhz': No such file or directory 00:32:16.226 [2024-04-26 15:13:01.945742] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:16.226 [2024-04-26 15:13:01.945784] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:16.226 [2024-04-26 15:13:01.945795] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:16.226 [2024-04-26 15:13:01.945808] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:16.226 request: 00:32:16.226 { 00:32:16.226 "name": "nvme0", 00:32:16.226 "trtype": "tcp", 00:32:16.226 "traddr": "127.0.0.1", 00:32:16.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:16.226 "adrfam": "ipv4", 00:32:16.226 "trsvcid": "4420", 00:32:16.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:16.226 "psk": "key0", 00:32:16.226 "method": "bdev_nvme_attach_controller", 00:32:16.226 "req_id": 1 00:32:16.226 } 00:32:16.226 Got JSON-RPC error response 00:32:16.226 response: 00:32:16.226 { 00:32:16.226 "code": -19, 00:32:16.226 "message": "No such device" 00:32:16.226 } 00:32:16.226 15:13:01 -- common/autotest_common.sh@641 -- # es=1 00:32:16.226 15:13:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:16.226 15:13:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:16.226 15:13:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:16.226 15:13:01 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:16.226 15:13:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:16.484 15:13:02 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:16.484 15:13:02 -- keyring/common.sh@15 -- # local name key digest path 00:32:16.484 15:13:02 -- keyring/common.sh@17 -- # name=key0 00:32:16.484 15:13:02 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:16.484 15:13:02 -- keyring/common.sh@17 -- # digest=0 00:32:16.484 15:13:02 -- keyring/common.sh@18 -- # mktemp 00:32:16.484 15:13:02 -- keyring/common.sh@18 -- # path=/tmp/tmp.laQmZbXb3t 00:32:16.484 15:13:02 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:16.484 15:13:02 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:16.484 15:13:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:16.484 15:13:02 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:32:16.484 15:13:02 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:32:16.484 15:13:02 -- nvmf/common.sh@693 -- # digest=0 00:32:16.484 15:13:02 -- nvmf/common.sh@694 -- # python - 00:32:16.741 15:13:02 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.laQmZbXb3t 00:32:16.742 15:13:02 -- keyring/common.sh@23 -- # echo /tmp/tmp.laQmZbXb3t 00:32:16.742 15:13:02 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.laQmZbXb3t 00:32:16.742 15:13:02 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.laQmZbXb3t 00:32:16.742 15:13:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.laQmZbXb3t 00:32:16.742 15:13:02 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.000 15:13:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.258 nvme0n1 00:32:17.258 15:13:02 -- keyring/file.sh@99 -- # get_refcnt key0 00:32:17.258 15:13:02 -- keyring/common.sh@12 -- # get_key key0 00:32:17.258 15:13:02 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.258 15:13:02 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.258 15:13:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.258 15:13:02 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.516 15:13:03 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:17.516 15:13:03 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:17.516 15:13:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:17.773 15:13:03 -- keyring/file.sh@101 -- # get_key key0 00:32:17.773 15:13:03 -- keyring/file.sh@101 -- # jq -r .removed 00:32:17.773 15:13:03 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.773 15:13:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.773 15:13:03 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:18.031 15:13:03 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:18.031 15:13:03 -- keyring/file.sh@102 -- # get_refcnt key0 00:32:18.031 15:13:03 -- keyring/common.sh@12 -- # get_key key0 00:32:18.031 15:13:03 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:18.031 15:13:03 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.031 15:13:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.031 15:13:03 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:18.289 15:13:03 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:18.289 15:13:03 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:18.289 15:13:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:18.289 15:13:04 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:18.289 15:13:04 -- keyring/file.sh@104 -- # jq length 00:32:18.289 
15:13:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.547 15:13:04 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:18.547 15:13:04 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.laQmZbXb3t 00:32:18.547 15:13:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.laQmZbXb3t 00:32:18.804 15:13:04 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.glk93BAYay 00:32:18.804 15:13:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.glk93BAYay 00:32:19.063 15:13:04 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.063 15:13:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.321 nvme0n1 00:32:19.321 15:13:05 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:19.321 15:13:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:19.888 15:13:05 -- keyring/file.sh@112 -- # config='{ 00:32:19.888 "subsystems": [ 00:32:19.888 { 00:32:19.888 "subsystem": "keyring", 00:32:19.888 "config": [ 00:32:19.888 { 00:32:19.888 "method": "keyring_file_add_key", 00:32:19.888 "params": { 00:32:19.888 "name": "key0", 00:32:19.888 "path": "/tmp/tmp.laQmZbXb3t" 00:32:19.888 } 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "method": "keyring_file_add_key", 00:32:19.888 "params": { 00:32:19.888 "name": "key1", 00:32:19.888 "path": "/tmp/tmp.glk93BAYay" 00:32:19.888 } 00:32:19.888 } 00:32:19.888 ] 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "subsystem": "iobuf", 00:32:19.888 "config": [ 00:32:19.888 { 00:32:19.888 "method": "iobuf_set_options", 00:32:19.888 "params": { 00:32:19.888 "small_pool_count": 8192, 00:32:19.888 "large_pool_count": 1024, 00:32:19.888 "small_bufsize": 8192, 00:32:19.888 "large_bufsize": 135168 00:32:19.888 } 00:32:19.888 } 00:32:19.888 ] 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "subsystem": "sock", 00:32:19.888 "config": [ 00:32:19.888 { 00:32:19.888 "method": "sock_impl_set_options", 00:32:19.888 "params": { 00:32:19.888 "impl_name": "posix", 00:32:19.888 "recv_buf_size": 2097152, 00:32:19.888 "send_buf_size": 2097152, 00:32:19.888 "enable_recv_pipe": true, 00:32:19.888 "enable_quickack": false, 00:32:19.888 "enable_placement_id": 0, 00:32:19.888 "enable_zerocopy_send_server": true, 00:32:19.888 "enable_zerocopy_send_client": false, 00:32:19.888 "zerocopy_threshold": 0, 00:32:19.888 "tls_version": 0, 00:32:19.888 "enable_ktls": false 00:32:19.888 } 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "method": "sock_impl_set_options", 00:32:19.888 "params": { 00:32:19.888 "impl_name": "ssl", 00:32:19.888 "recv_buf_size": 4096, 00:32:19.888 "send_buf_size": 4096, 00:32:19.888 "enable_recv_pipe": true, 00:32:19.888 "enable_quickack": false, 00:32:19.888 "enable_placement_id": 0, 00:32:19.888 "enable_zerocopy_send_server": true, 00:32:19.888 "enable_zerocopy_send_client": false, 00:32:19.888 "zerocopy_threshold": 0, 00:32:19.888 
"tls_version": 0, 00:32:19.888 "enable_ktls": false 00:32:19.888 } 00:32:19.888 } 00:32:19.888 ] 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "subsystem": "vmd", 00:32:19.888 "config": [] 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "subsystem": "accel", 00:32:19.888 "config": [ 00:32:19.888 { 00:32:19.888 "method": "accel_set_options", 00:32:19.888 "params": { 00:32:19.888 "small_cache_size": 128, 00:32:19.888 "large_cache_size": 16, 00:32:19.888 "task_count": 2048, 00:32:19.888 "sequence_count": 2048, 00:32:19.888 "buf_count": 2048 00:32:19.888 } 00:32:19.888 } 00:32:19.888 ] 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "subsystem": "bdev", 00:32:19.888 "config": [ 00:32:19.888 { 00:32:19.888 "method": "bdev_set_options", 00:32:19.888 "params": { 00:32:19.888 "bdev_io_pool_size": 65535, 00:32:19.888 "bdev_io_cache_size": 256, 00:32:19.888 "bdev_auto_examine": true, 00:32:19.888 "iobuf_small_cache_size": 128, 00:32:19.888 "iobuf_large_cache_size": 16 00:32:19.888 } 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "method": "bdev_raid_set_options", 00:32:19.888 "params": { 00:32:19.888 "process_window_size_kb": 1024 00:32:19.888 } 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "method": "bdev_iscsi_set_options", 00:32:19.888 "params": { 00:32:19.888 "timeout_sec": 30 00:32:19.888 } 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "method": "bdev_nvme_set_options", 00:32:19.888 "params": { 00:32:19.888 "action_on_timeout": "none", 00:32:19.888 "timeout_us": 0, 00:32:19.888 "timeout_admin_us": 0, 00:32:19.888 "keep_alive_timeout_ms": 10000, 00:32:19.888 "arbitration_burst": 0, 00:32:19.888 "low_priority_weight": 0, 00:32:19.888 "medium_priority_weight": 0, 00:32:19.888 "high_priority_weight": 0, 00:32:19.888 "nvme_adminq_poll_period_us": 10000, 00:32:19.888 "nvme_ioq_poll_period_us": 0, 00:32:19.888 "io_queue_requests": 512, 00:32:19.888 "delay_cmd_submit": true, 00:32:19.888 "transport_retry_count": 4, 00:32:19.888 "bdev_retry_count": 3, 00:32:19.888 "transport_ack_timeout": 0, 00:32:19.888 "ctrlr_loss_timeout_sec": 0, 00:32:19.888 "reconnect_delay_sec": 0, 00:32:19.888 "fast_io_fail_timeout_sec": 0, 00:32:19.888 "disable_auto_failback": false, 00:32:19.888 "generate_uuids": false, 00:32:19.888 "transport_tos": 0, 00:32:19.888 "nvme_error_stat": false, 00:32:19.888 "rdma_srq_size": 0, 00:32:19.888 "io_path_stat": false, 00:32:19.888 "allow_accel_sequence": false, 00:32:19.888 "rdma_max_cq_size": 0, 00:32:19.888 "rdma_cm_event_timeout_ms": 0, 00:32:19.888 "dhchap_digests": [ 00:32:19.888 "sha256", 00:32:19.888 "sha384", 00:32:19.888 "sha512" 00:32:19.888 ], 00:32:19.888 "dhchap_dhgroups": [ 00:32:19.888 "null", 00:32:19.888 "ffdhe2048", 00:32:19.888 "ffdhe3072", 00:32:19.888 "ffdhe4096", 00:32:19.888 "ffdhe6144", 00:32:19.888 "ffdhe8192" 00:32:19.888 ] 00:32:19.888 } 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "method": "bdev_nvme_attach_controller", 00:32:19.888 "params": { 00:32:19.888 "name": "nvme0", 00:32:19.888 "trtype": "TCP", 00:32:19.888 "adrfam": "IPv4", 00:32:19.888 "traddr": "127.0.0.1", 00:32:19.888 "trsvcid": "4420", 00:32:19.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.888 "prchk_reftag": false, 00:32:19.888 "prchk_guard": false, 00:32:19.888 "ctrlr_loss_timeout_sec": 0, 00:32:19.888 "reconnect_delay_sec": 0, 00:32:19.888 "fast_io_fail_timeout_sec": 0, 00:32:19.888 "psk": "key0", 00:32:19.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:19.888 "hdgst": false, 00:32:19.888 "ddgst": false 00:32:19.888 } 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "method": "bdev_nvme_set_hotplug", 
00:32:19.888 "params": { 00:32:19.888 "period_us": 100000, 00:32:19.888 "enable": false 00:32:19.888 } 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "method": "bdev_wait_for_examine" 00:32:19.888 } 00:32:19.888 ] 00:32:19.888 }, 00:32:19.888 { 00:32:19.888 "subsystem": "nbd", 00:32:19.888 "config": [] 00:32:19.888 } 00:32:19.888 ] 00:32:19.888 }' 00:32:19.888 15:13:05 -- keyring/file.sh@114 -- # killprocess 3933860 00:32:19.888 15:13:05 -- common/autotest_common.sh@936 -- # '[' -z 3933860 ']' 00:32:19.888 15:13:05 -- common/autotest_common.sh@940 -- # kill -0 3933860 00:32:19.888 15:13:05 -- common/autotest_common.sh@941 -- # uname 00:32:19.888 15:13:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:19.888 15:13:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3933860 00:32:19.888 15:13:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:19.888 15:13:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:19.888 15:13:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3933860' 00:32:19.888 killing process with pid 3933860 00:32:19.888 15:13:05 -- common/autotest_common.sh@955 -- # kill 3933860 00:32:19.888 Received shutdown signal, test time was about 1.000000 seconds 00:32:19.888 00:32:19.888 Latency(us) 00:32:19.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.888 =================================================================================================================== 00:32:19.889 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:19.889 15:13:05 -- common/autotest_common.sh@960 -- # wait 3933860 00:32:19.889 15:13:05 -- keyring/file.sh@117 -- # bperfpid=3935193 00:32:19.889 15:13:05 -- keyring/file.sh@119 -- # waitforlisten 3935193 /var/tmp/bperf.sock 00:32:19.889 15:13:05 -- common/autotest_common.sh@817 -- # '[' -z 3935193 ']' 00:32:19.889 15:13:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:19.889 15:13:05 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:19.889 15:13:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:19.889 15:13:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:19.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:32:19.889 15:13:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:19.889 15:13:05 -- common/autotest_common.sh@10 -- # set +x 00:32:19.889 15:13:05 -- keyring/file.sh@115 -- # echo '{ 00:32:19.889 "subsystems": [ 00:32:19.889 { 00:32:19.889 "subsystem": "keyring", 00:32:19.889 "config": [ 00:32:19.889 { 00:32:19.889 "method": "keyring_file_add_key", 00:32:19.889 "params": { 00:32:19.889 "name": "key0", 00:32:19.889 "path": "/tmp/tmp.laQmZbXb3t" 00:32:19.889 } 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "method": "keyring_file_add_key", 00:32:19.889 "params": { 00:32:19.889 "name": "key1", 00:32:19.889 "path": "/tmp/tmp.glk93BAYay" 00:32:19.889 } 00:32:19.889 } 00:32:19.889 ] 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "subsystem": "iobuf", 00:32:19.889 "config": [ 00:32:19.889 { 00:32:19.889 "method": "iobuf_set_options", 00:32:19.889 "params": { 00:32:19.889 "small_pool_count": 8192, 00:32:19.889 "large_pool_count": 1024, 00:32:19.889 "small_bufsize": 8192, 00:32:19.889 "large_bufsize": 135168 00:32:19.889 } 00:32:19.889 } 00:32:19.889 ] 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "subsystem": "sock", 00:32:19.889 "config": [ 00:32:19.889 { 00:32:19.889 "method": "sock_impl_set_options", 00:32:19.889 "params": { 00:32:19.889 "impl_name": "posix", 00:32:19.889 "recv_buf_size": 2097152, 00:32:19.889 "send_buf_size": 2097152, 00:32:19.889 "enable_recv_pipe": true, 00:32:19.889 "enable_quickack": false, 00:32:19.889 "enable_placement_id": 0, 00:32:19.889 "enable_zerocopy_send_server": true, 00:32:19.889 "enable_zerocopy_send_client": false, 00:32:19.889 "zerocopy_threshold": 0, 00:32:19.889 "tls_version": 0, 00:32:19.889 "enable_ktls": false 00:32:19.889 } 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "method": "sock_impl_set_options", 00:32:19.889 "params": { 00:32:19.889 "impl_name": "ssl", 00:32:19.889 "recv_buf_size": 4096, 00:32:19.889 "send_buf_size": 4096, 00:32:19.889 "enable_recv_pipe": true, 00:32:19.889 "enable_quickack": false, 00:32:19.889 "enable_placement_id": 0, 00:32:19.889 "enable_zerocopy_send_server": true, 00:32:19.889 "enable_zerocopy_send_client": false, 00:32:19.889 "zerocopy_threshold": 0, 00:32:19.889 "tls_version": 0, 00:32:19.889 "enable_ktls": false 00:32:19.889 } 00:32:19.889 } 00:32:19.889 ] 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "subsystem": "vmd", 00:32:19.889 "config": [] 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "subsystem": "accel", 00:32:19.889 "config": [ 00:32:19.889 { 00:32:19.889 "method": "accel_set_options", 00:32:19.889 "params": { 00:32:19.889 "small_cache_size": 128, 00:32:19.889 "large_cache_size": 16, 00:32:19.889 "task_count": 2048, 00:32:19.889 "sequence_count": 2048, 00:32:19.889 "buf_count": 2048 00:32:19.889 } 00:32:19.889 } 00:32:19.889 ] 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "subsystem": "bdev", 00:32:19.889 "config": [ 00:32:19.889 { 00:32:19.889 "method": "bdev_set_options", 00:32:19.889 "params": { 00:32:19.889 "bdev_io_pool_size": 65535, 00:32:19.889 "bdev_io_cache_size": 256, 00:32:19.889 "bdev_auto_examine": true, 00:32:19.889 "iobuf_small_cache_size": 128, 00:32:19.889 "iobuf_large_cache_size": 16 00:32:19.889 } 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "method": "bdev_raid_set_options", 00:32:19.889 "params": { 00:32:19.889 "process_window_size_kb": 1024 00:32:19.889 } 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "method": "bdev_iscsi_set_options", 00:32:19.889 "params": { 00:32:19.889 "timeout_sec": 30 00:32:19.889 } 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "method": "bdev_nvme_set_options", 
00:32:19.889 "params": { 00:32:19.889 "action_on_timeout": "none", 00:32:19.889 "timeout_us": 0, 00:32:19.889 "timeout_admin_us": 0, 00:32:19.889 "keep_alive_timeout_ms": 10000, 00:32:19.889 "arbitration_burst": 0, 00:32:19.889 "low_priority_weight": 0, 00:32:19.889 "medium_priority_weight": 0, 00:32:19.889 "high_priority_weight": 0, 00:32:19.889 "nvme_adminq_poll_period_us": 10000, 00:32:19.889 "nvme_ioq_poll_period_us": 0, 00:32:19.889 "io_queue_requests": 512, 00:32:19.889 "delay_cmd_submit": true, 00:32:19.889 "transport_retry_count": 4, 00:32:19.889 "bdev_retry_count": 3, 00:32:19.889 "transport_ack_timeout": 0, 00:32:19.889 "ctrlr_loss_timeout_sec": 0, 00:32:19.889 "reconnect_delay_sec": 0, 00:32:19.889 "fast_io_fail_timeout_sec": 0, 00:32:19.889 "disable_auto_failback": false, 00:32:19.889 "generate_uuids": false, 00:32:19.889 "transport_tos": 0, 00:32:19.889 "nvme_error_stat": false, 00:32:19.889 "rdma_srq_size": 0, 00:32:19.889 "io_path_stat": false, 00:32:19.889 "allow_accel_sequence": false, 00:32:19.889 "rdma_max_cq_size": 0, 00:32:19.889 "rdma_cm_event_timeout_ms": 0, 00:32:19.889 "dhchap_digests": [ 00:32:19.889 "sha256", 00:32:19.889 "sha384", 00:32:19.889 "sha512" 00:32:19.889 ], 00:32:19.889 "dhchap_dhgroups": [ 00:32:19.889 "null", 00:32:19.889 "ffdhe2048", 00:32:19.889 "ffdhe3072", 00:32:19.889 "ffdhe4096", 00:32:19.889 "ffdhe6144", 00:32:19.889 "ffdhe8192" 00:32:19.889 ] 00:32:19.889 } 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "method": "bdev_nvme_attach_controller", 00:32:19.889 "params": { 00:32:19.889 "name": "nvme0", 00:32:19.889 "trtype": "TCP", 00:32:19.889 "adrfam": "IPv4", 00:32:19.889 "traddr": "127.0.0.1", 00:32:19.889 "trsvcid": "4420", 00:32:19.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.889 "prchk_reftag": false, 00:32:19.889 "prchk_guard": false, 00:32:19.889 "ctrlr_loss_timeout_sec": 0, 00:32:19.889 "reconnect_delay_sec": 0, 00:32:19.889 "fast_io_fail_timeout_sec": 0, 00:32:19.889 "psk": "key0", 00:32:19.889 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:19.889 "hdgst": false, 00:32:19.889 "ddgst": false 00:32:19.889 } 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "method": "bdev_nvme_set_hotplug", 00:32:19.889 "params": { 00:32:19.889 "period_us": 100000, 00:32:19.889 "enable": false 00:32:19.889 } 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "method": "bdev_wait_for_examine" 00:32:19.889 } 00:32:19.889 ] 00:32:19.889 }, 00:32:19.889 { 00:32:19.889 "subsystem": "nbd", 00:32:19.889 "config": [] 00:32:19.889 } 00:32:19.889 ] 00:32:19.889 }' 00:32:19.889 [2024-04-26 15:13:05.616574] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 24.07.0-rc0 initialization... 00:32:19.889 [2024-04-26 15:13:05.616648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935193 ] 00:32:20.148 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.148 [2024-04-26 15:13:05.646775] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:20.148 [2024-04-26 15:13:05.674411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.148 [2024-04-26 15:13:05.762336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.406 [2024-04-26 15:13:05.938942] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:20.972 15:13:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:20.972 15:13:06 -- common/autotest_common.sh@850 -- # return 0 00:32:20.972 15:13:06 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:20.972 15:13:06 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.972 15:13:06 -- keyring/file.sh@120 -- # jq length 00:32:21.230 15:13:06 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:21.230 15:13:06 -- keyring/file.sh@121 -- # get_refcnt key0 00:32:21.230 15:13:06 -- keyring/common.sh@12 -- # get_key key0 00:32:21.230 15:13:06 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.230 15:13:06 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.230 15:13:06 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.230 15:13:06 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.490 15:13:07 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:21.490 15:13:07 -- keyring/file.sh@122 -- # get_refcnt key1 00:32:21.490 15:13:07 -- keyring/common.sh@12 -- # get_key key1 00:32:21.490 15:13:07 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.490 15:13:07 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.490 15:13:07 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.490 15:13:07 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:21.780 15:13:07 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:21.780 15:13:07 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:21.780 15:13:07 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:21.780 15:13:07 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:22.038 15:13:07 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:22.038 15:13:07 -- keyring/file.sh@1 -- # cleanup 00:32:22.038 15:13:07 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.laQmZbXb3t /tmp/tmp.glk93BAYay 00:32:22.038 15:13:07 -- keyring/file.sh@20 -- # killprocess 3935193 00:32:22.038 15:13:07 -- common/autotest_common.sh@936 -- # '[' -z 3935193 ']' 00:32:22.038 15:13:07 -- common/autotest_common.sh@940 -- # kill -0 3935193 00:32:22.038 15:13:07 -- common/autotest_common.sh@941 -- # uname 00:32:22.038 15:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:22.038 15:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3935193 00:32:22.038 15:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:22.038 15:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:22.038 15:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3935193' 00:32:22.038 killing process with pid 3935193 00:32:22.038 15:13:07 -- common/autotest_common.sh@955 -- # kill 3935193 00:32:22.038 Received shutdown signal, test time was about 1.000000 seconds 00:32:22.038 00:32:22.038 Latency(us) 00:32:22.039 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.039 =================================================================================================================== 00:32:22.039 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:22.039 15:13:07 -- common/autotest_common.sh@960 -- # wait 3935193 00:32:22.296 15:13:07 -- keyring/file.sh@21 -- # killprocess 3933751 00:32:22.296 15:13:07 -- common/autotest_common.sh@936 -- # '[' -z 3933751 ']' 00:32:22.296 15:13:07 -- common/autotest_common.sh@940 -- # kill -0 3933751 00:32:22.296 15:13:07 -- common/autotest_common.sh@941 -- # uname 00:32:22.296 15:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:22.296 15:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3933751 00:32:22.296 15:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:22.296 15:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:22.296 15:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3933751' 00:32:22.296 killing process with pid 3933751 00:32:22.296 15:13:07 -- common/autotest_common.sh@955 -- # kill 3933751 00:32:22.296 [2024-04-26 15:13:07.832553] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:22.296 15:13:07 -- common/autotest_common.sh@960 -- # wait 3933751 00:32:22.554 00:32:22.554 real 0m13.850s 00:32:22.554 user 0m34.776s 00:32:22.554 sys 0m3.134s 00:32:22.554 15:13:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:22.554 15:13:08 -- common/autotest_common.sh@10 -- # set +x 00:32:22.554 ************************************ 00:32:22.554 END TEST keyring_file 00:32:22.554 ************************************ 00:32:22.554 15:13:08 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:32:22.554 15:13:08 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:32:22.554 15:13:08 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:32:22.555 15:13:08 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:32:22.555 15:13:08 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:32:22.555 15:13:08 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:32:22.555 15:13:08 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:32:22.555 15:13:08 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:32:22.555 15:13:08 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:32:22.555 15:13:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:22.555 15:13:08 -- common/autotest_common.sh@10 -- # set +x 00:32:22.555 15:13:08 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:32:22.555 15:13:08 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:32:22.555 15:13:08 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:32:22.555 15:13:08 -- common/autotest_common.sh@10 -- # set +x 00:32:24.456 INFO: APP EXITING 00:32:24.456 INFO: killing all VMs 00:32:24.456 INFO: killing vhost app 
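The assertions at keyring/file.sh@120-123 earlier in the trace all go through the bperf RPC socket: keyring_get_keys is piped to jq once for the key count and once per key for its refcnt. A sketch of that check, with helper bodies inferred from the visible keyring/common.sh commands: key0 is expected at refcnt 2 because it is also attached as nvme0's psk, while key1 was only registered.

    bperf_cmd() {   # run an RPC against the bperf UNIX socket
        ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    get_refcnt() {  # extract one key's refcnt from keyring_get_keys
        bperf_cmd keyring_get_keys \
            | jq -r ".[] | select(.name == \"$1\") | .refcnt"
    }

    (( $(bperf_cmd keyring_get_keys | jq length) == 2 ))
    (( $(get_refcnt key0) == 2 ))   # registered + in use as nvme0's psk
    (( $(get_refcnt key1) == 1 ))   # registered only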
00:32:24.456 INFO: EXIT DONE 00:32:25.389 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:32:25.389 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:25.389 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:25.648 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:25.648 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:25.648 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:25.648 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:25.648 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:25.648 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:25.648 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:25.648 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:25.648 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:25.648 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:25.648 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:25.648 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:25.648 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:25.648 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:27.022 Cleaning 00:32:27.022 Removing: /var/run/dpdk/spdk0/config 00:32:27.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:27.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:27.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:27.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:27.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:27.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:27.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:27.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:27.022 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:27.022 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:27.022 Removing: /var/run/dpdk/spdk1/config 00:32:27.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:27.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:27.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:27.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:27.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:27.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:27.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:27.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:27.023 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:27.023 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:27.023 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:27.023 Removing: /var/run/dpdk/spdk2/config 00:32:27.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:27.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:27.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:27.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:27.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:27.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:27.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:27.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:27.023 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:27.023 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:27.023 Removing: /var/run/dpdk/spdk3/config 00:32:27.023 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:27.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:27.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:27.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:27.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:27.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:27.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:27.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:27.023 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:27.023 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:27.023 Removing: /var/run/dpdk/spdk4/config 00:32:27.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:27.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:27.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:27.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:27.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:27.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:27.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:27.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:27.023 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:27.023 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:27.023 Removing: /dev/shm/bdev_svc_trace.1 00:32:27.023 Removing: /dev/shm/nvmf_trace.0 00:32:27.023 Removing: /dev/shm/spdk_tgt_trace.pid3647312 00:32:27.023 Removing: /var/run/dpdk/spdk0 00:32:27.023 Removing: /var/run/dpdk/spdk1 00:32:27.023 Removing: /var/run/dpdk/spdk2 00:32:27.023 Removing: /var/run/dpdk/spdk3 00:32:27.023 Removing: /var/run/dpdk/spdk4 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3645595 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3646350 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3647312 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3647792 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3648493 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3648631 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3649363 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3649376 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3649632 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3650938 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3651879 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3652192 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3652384 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3652597 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3652801 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3652969 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3653254 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3653440 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3654034 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3656396 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3656575 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3656741 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3656746 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3657182 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3657186 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3657621 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3657633 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3657931 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3657943 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3658111 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3658241 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3658623 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3658788 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3659105 00:32:27.023 Removing: 
/var/run/dpdk/spdk_pid3659285 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3659329 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3659535 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3659697 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3659978 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3660144 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3660313 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3660588 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3660758 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3660919 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3661200 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3661362 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3661532 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3661807 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3661976 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3662138 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3662420 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3662583 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3662755 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3663031 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3663203 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3663366 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3663648 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3663729 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3664017 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3666155 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3719340 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3721982 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3727859 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3731781 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3734157 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3734561 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3741859 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3741864 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3742520 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3743172 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3743710 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3744115 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3744169 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3744375 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3744502 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3744514 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3745142 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3745709 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3746360 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3746762 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3746768 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3747030 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3747800 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3748643 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3754021 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3754183 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3756845 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3760569 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3763244 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3769645 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3774892 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3776080 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3776746 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3786995 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3789116 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3791919 00:32:27.023 Removing: /var/run/dpdk/spdk_pid3793096 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3794413 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3794518 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3794571 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3794705 00:32:27.282 Removing: 
/var/run/dpdk/spdk_pid3795143 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3796451 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3797176 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3798059 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3799684 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3800024 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3800585 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3802993 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3806363 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3809840 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3833534 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3836171 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3840089 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3841038 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3842131 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3844710 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3846965 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3851226 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3851230 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3854023 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3854280 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3854418 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3854678 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3854683 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3855762 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3856942 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3858285 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3860031 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3861211 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3862393 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3865954 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3866407 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3867425 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3868011 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3871485 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3873463 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3876889 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3879973 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3884458 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3884460 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3897195 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3897607 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3898131 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3898535 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3899120 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3899530 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3899936 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3900341 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3902861 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3903000 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3906819 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3906994 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3908656 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3913660 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3913665 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3916596 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3917998 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3919403 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3920156 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3921553 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3922494 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3928454 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3928758 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3929152 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3930716 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3931113 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3931395 00:32:27.282 Removing: 
/var/run/dpdk/spdk_pid3933751 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3933860 00:32:27.282 Removing: /var/run/dpdk/spdk_pid3935193 00:32:27.282 Clean 00:32:27.540 15:13:13 -- common/autotest_common.sh@1437 -- # return 0 00:32:27.541 15:13:13 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:32:27.541 15:13:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:27.541 15:13:13 -- common/autotest_common.sh@10 -- # set +x 00:32:27.541 15:13:13 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:32:27.541 15:13:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:27.541 15:13:13 -- common/autotest_common.sh@10 -- # set +x 00:32:27.541 15:13:13 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:27.541 15:13:13 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:27.541 15:13:13 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:27.541 15:13:13 -- spdk/autotest.sh@389 -- # hash lcov 00:32:27.541 15:13:13 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:27.541 15:13:13 -- spdk/autotest.sh@391 -- # hostname 00:32:27.541 15:13:13 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:27.797 geninfo: WARNING: invalid characters removed from testname! 00:32:59.865 15:13:40 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:59.865 15:13:44 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:02.395 15:13:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:04.929 15:13:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:08.255 15:13:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:10.783 15:13:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:14.066 15:13:59 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:14.325 15:13:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.325 15:13:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:14.325 15:13:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.325 15:13:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.325 15:13:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.325 15:13:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.325 15:13:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.325 15:13:59 -- paths/export.sh@5 -- $ export PATH 00:33:14.325 15:13:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.325 15:13:59 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:14.325 15:13:59 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:14.325 15:13:59 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714137239.XXXXXX 00:33:14.325 15:13:59 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714137239.nhpm8x 00:33:14.325 15:13:59 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:14.325 15:13:59 -- common/autobuild_common.sh@441 -- $ '[' -n main ']' 00:33:14.325 15:13:59 -- 
common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:33:14.325 15:13:59 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:33:14.325 15:13:59 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:14.325 15:13:59 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:14.325 15:13:59 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:14.325 15:13:59 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:33:14.325 15:13:59 -- common/autotest_common.sh@10 -- $ set +x 00:33:14.325 15:13:59 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:33:14.325 15:13:59 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:33:14.325 15:13:59 -- pm/common@17 -- $ local monitor 00:33:14.325 15:13:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.325 15:13:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3945400 00:33:14.325 15:13:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.325 15:13:59 -- pm/common@21 -- $ date +%s 00:33:14.325 15:13:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3945402 00:33:14.325 15:13:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.325 15:13:59 -- pm/common@21 -- $ date +%s 00:33:14.325 15:13:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3945405 00:33:14.325 15:13:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.325 15:13:59 -- pm/common@21 -- $ date +%s 00:33:14.325 15:13:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3945408 00:33:14.325 15:13:59 -- pm/common@26 -- $ sleep 1 00:33:14.325 15:13:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714137239 00:33:14.325 15:13:59 -- pm/common@21 -- $ date +%s 00:33:14.325 15:13:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714137239 00:33:14.325 15:13:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714137239 00:33:14.325 15:13:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714137239 00:33:14.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714137239_collect-vmstat.pm.log 00:33:14.325 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714137239_collect-bmc-pm.bmc.pm.log 00:33:14.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714137239_collect-cpu-load.pm.log 00:33:14.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714137239_collect-cpu-temp.pm.log 00:33:15.262 15:14:00 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:33:15.262 15:14:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:33:15.262 15:14:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.262 15:14:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:15.262 15:14:00 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:15.262 15:14:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:15.262 15:14:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:15.262 15:14:00 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:15.262 15:14:00 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:15.262 15:14:00 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:15.262 15:14:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:15.262 15:14:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:15.262 15:14:00 -- pm/common@30 -- $ signal_monitor_resources TERM 00:33:15.262 15:14:00 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:33:15.262 15:14:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.262 15:14:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:15.262 15:14:00 -- pm/common@45 -- $ pid=3945415 00:33:15.262 15:14:00 -- pm/common@52 -- $ sudo kill -TERM 3945415 00:33:15.262 15:14:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.262 15:14:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:15.262 15:14:00 -- pm/common@45 -- $ pid=3945416 00:33:15.262 15:14:00 -- pm/common@52 -- $ sudo kill -TERM 3945416 00:33:15.262 15:14:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.262 15:14:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:15.262 15:14:00 -- pm/common@45 -- $ pid=3945417 00:33:15.262 15:14:00 -- pm/common@52 -- $ sudo kill -TERM 3945417 00:33:15.262 15:14:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.262 15:14:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:15.262 15:14:00 -- pm/common@45 -- $ pid=3945418 00:33:15.262 15:14:00 -- pm/common@52 -- $ sudo kill -TERM 3945418 00:33:15.262 + [[ -n 3539708 ]] 00:33:15.262 + sudo kill 3539708 00:33:15.532 [Pipeline] } 00:33:15.549 [Pipeline] // stage 00:33:15.555 [Pipeline] } 00:33:15.575 [Pipeline] // timeout 00:33:15.580 [Pipeline] } 00:33:15.597 [Pipeline] // catchError 00:33:15.603 [Pipeline] } 00:33:15.620 [Pipeline] // wrap 00:33:15.627 [Pipeline] } 00:33:15.643 [Pipeline] // catchError 00:33:15.653 [Pipeline] stage 00:33:15.656 [Pipeline] { (Epilogue) 00:33:15.673 [Pipeline] catchError 00:33:15.675 
[Pipeline] { 00:33:15.689 [Pipeline] echo 00:33:15.690 Cleanup processes 00:33:15.696 [Pipeline] sh 00:33:15.979 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.979 3945544 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:15.979 3945680 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.993 [Pipeline] sh 00:33:16.275 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:16.275 ++ grep -v 'sudo pgrep' 00:33:16.275 ++ awk '{print $1}' 00:33:16.275 + sudo kill -9 3945544 00:33:16.292 [Pipeline] sh 00:33:16.593 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:24.755 [Pipeline] sh 00:33:25.037 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:25.037 Artifacts sizes are good 00:33:25.049 [Pipeline] archiveArtifacts 00:33:25.055 Archiving artifacts 00:33:25.235 [Pipeline] sh 00:33:25.512 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:25.525 [Pipeline] cleanWs 00:33:25.535 [WS-CLEANUP] Deleting project workspace... 00:33:25.535 [WS-CLEANUP] Deferred wipeout is used... 00:33:25.541 [WS-CLEANUP] done 00:33:25.542 [Pipeline] } 00:33:25.563 [Pipeline] // catchError 00:33:25.571 [Pipeline] sh 00:33:25.847 + logger -p user.info -t JENKINS-CI 00:33:25.856 [Pipeline] } 00:33:25.873 [Pipeline] // stage 00:33:25.878 [Pipeline] } 00:33:25.896 [Pipeline] // node 00:33:25.901 [Pipeline] End of Pipeline 00:33:25.949 Finished: SUCCESS
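For reference, the epilogue's process sweep above reduces to a single pipeline: list everything still running out of the workspace checkout, drop the pgrep invocation itself, and SIGKILL the remainder (in this run that caught the ipmitool sdr dump, pid 3945544). The trace spells it out as a $() substitution; the xargs -r form below is an equivalent sketch that stays a no-op when nothing is left.

    sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
        | grep -v 'sudo pgrep' \
        | awk '{print $1}' \
        | xargs -r sudo kill -9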